LIO

The Linux SCSI Target Wiki

| language              =
| genre                  = SCSI target
| license                = {{GPLv2}}
| website                = {{RTS website}}
}}
{{Image|LIO Architecture.png|{{T}} architecture overview.}}
{{Image|LIO iSCSI Architecture.png|{{T}} iSCSI architecture diagram.}}
{{Image|Portal-Groups.jpg|SCSI Portal Group and multipath architecture overview.}}
{{Image|SCSI Standards Architecture.jpg|SCSI standards diagram.}}
{{Image|Linux-fireware-target-bootc-macosx.png|The IEEE FireWire driver exporting IBLOCK LUNs to a Mac OS X-based IEEE-1394/FireWire client.}}

'''{{Target}}''' ({{T}}) has been the Linux SCSI target since kernel version {{RTS releases|LIO|kernel_ver}}.<ref name=h-online>{{cite web| url=http://www.h-online.com/open/features/Kernel-Log-Coming-in-2-6-38-Part-4-Storage-1199926.html| title=Kernel Log: Coming in 2.6.38 (Part 4) - Storage| author=Thorsten Leemhuis| publisher=Heise Online| date=2011-03-02}}</ref><ref name=lwn>{{cite web| url=http://lwn.net/SubscriberLink/420691/eb6ae8a12222aac6/| title=Shooting at SCSI targets| author=Jonathan Corbet| date=2010-12-22| publisher=lwn.net}}</ref> It supports a rapidly growing number of [[#Fabric modules|fabric modules]], and all existing Linux block devices as [[#Backstores|backstores]].

== Overview ==

{{Target}} is based on a SCSI engine that implements the semantics of a SCSI target as described in the SCSI Architecture Model (SAM), and supports its comprehensive [[SPC-3]]/[[SPC-4]] feature set in a fabric-agnostic way. The SCSI target core does not directly communicate with initiators and it does not directly access data on disk.

{{T}} has obtained [[VMware vSphere]]&nbsp;4 Ready certification on Buffalo [http://www.risingtidesystems.com/doc/VMware_HCL_Buffalo.pdf PDF] and QNAP [http://www.risingtidesystems.com/doc/VMware_HCL_QNAP.pdf PDF] systems, and vSphere&nbsp;5 Ready certification on Netgear [http://www.risingtidesystems.com/doc/VMware_HCL_Netgear.pdf PDF] and Pure Storage [http://www.risingtidesystems.com/doc/VMware_HCL_PureStorage.pdf PDF] systems.

Native support in OpenStack ([https://wiki.openstack.org/wiki/Cinder/LIO-Grizzly setup], [https://review.openstack.org/#/c/18274/ code]), starting with the Grizzly release, makes {{T}} also an attractive storage option for cloud deployments.

The {{T}} core ({{RTS releases|LIO|module_repo}}, see {{RTS releases|LIO|module_info}}) was released with Linux kernel {{RTS releases|LIO|kernel_ver}} on {{RTS releases|LIO|initial_date}}.<ref>{{RTS releases|LIO|kernel_rel}}</ref>

== Setup ==

''[[targetcli]]'' provides a comprehensive, powerful, and easy-to-use CLI tool to configure and manage {{T}}. ''targetcli'' was developed by {{C}}.
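
For example, a minimal interactive ''targetcli'' session to export a file-backed LUN over [[iSCSI]] might look like the following sketch. The backing file, size and IQNs are illustrative placeholders, and parameter names can differ slightly between ''targetcli'' versions; recent versions may also create a default network portal automatically:

<pre>
# targetcli
/> /backstores/fileio create name=disk0 file_or_dev=/tmp/disk0.img size=1G
/> /iscsi create iqn.2003-01.org.linux-iscsi.example:disk0
/> /iscsi/iqn.2003-01.org.linux-iscsi.example:disk0/tpg1/luns create /backstores/fileio/disk0
/> /iscsi/iqn.2003-01.org.linux-iscsi.example:disk0/tpg1/acls create iqn.1994-05.com.example:initiator1
/> /iscsi/iqn.2003-01.org.linux-iscsi.example:disk0/tpg1/portals create 0.0.0.0 3260
/> saveconfig
</pre>

The resulting configuration can be reviewed at any time with <code>ls</code> from the root of the ''targetcli'' tree, and <code>saveconfig</code> persists it across reboots.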

== {{anchor|Fabric modules}}Fabric modules ==

Fabric modules implement the "frontend" of the SCSI target. They "speak" specific protocols that transport SCSI commands. The ''Fabric Hardware Abstraction Layer'' (''F-HAL'') allows all protocol-specific processing to be encapsulated in fabric modules.

== {{anchor|Backstores}} Backstores ==

Backstores implement the {{T}} "backend". They provide the methods for accessing data on disk. A backstore subsystem plugin is a physical storage object that provides the block device underlying a SCSI [[#Endpoint|Endpoint]].

Backstore objects can be added via the ''Storage Hardware Abstraction Layer'' (''S-HAL'') that brings storage hardware into the {{T}} engine as raw block devices, on which the Linux stack just works (including complex functionality such as software RAID, LVM, snapshots, virtualization, etc.).

{{T}} supports the [[SCSI-3]] standard for all backstore devices (block devices and/or VFS):

* {{anchor|FILEIO}} '''FILEIO''' (Linux VFS devices): any file on a mounted filesystem. It may be backed by a file or an underlying real block device. FILEIO uses <code>struct file</code> to serve block I/O with various methods (synchronous or asynchronous, buffered or direct). The Linux kernel code for filesystems resides in ''linux/fs''. By default, [[FILEIO]] uses non-buffered mode (<code>O_SYNC</code> set).

{{Ambox| type=warning| head=Do not use buffered FILEIO| text=Creating a FILEIO backend with ''buffered=True'' enables the buffer cache. While this can provide significant performance increases, it also creates a serious data integrity hazard: if the system crashes for any reason, an unflushed buffer cache can cause the entire backstore to be irrecoverably corrupted.}}

* {{anchor|IBLOCK}} '''IBLOCK''' (Linux BLOCK devices): any block device that appears in ''/sys/block''. The Linux kernel code for block devices is in ''linux/block''.
* {{anchor|RAMDISK|RAMDISK_MCP|RD_MCP}} '''Memory Copy RAMDISK''' (Linux RAMDISK_MCP): Memory Copy RAM disks (''rd_mcp'') provide RAM disks with full SCSI emulation and separate memory mappings using memory copy for initiators, thus providing multi-session capability. This is most useful for fast, volatile mass storage in production.

SCSI functionality is implemented directly in the {{T}} engine in a fabric-agnostic way, including a number of high-end features, such as [[Persistent Reservations]] (PRs), [[Asymmetric Logical Unit Assignment]] (ALUA), and [[vStorage APIs for Array Integration]] (VAAI), following the [[SPC-4]] standard.
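
From an initiator, these SPC features can be observed with standard SCSI utilities. A brief sketch using the ''sg3_utils'' package, where <code>/dev/sdX</code> is a placeholder for a LUN exported by {{T}}:

<pre>
# Read registered Persistent Reservation keys (PERSISTENT RESERVE IN)
sg_persist --in --read-keys /dev/sdX

# Report the ALUA target port groups and their access states
sg_rtpg --decode /dev/sdX

# Show the Device Identification VPD page (NAA/EUI designators)
sg_vpd --page=di /dev/sdX
</pre>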

The backstore devices ([[FILEIO]], [[IBLOCK]], [[pSCSI]], [[RAMDISK]], etc.) report the underlying HW limitations for things like TCQ depth, ''MaxSectors'', ''TaskAbortedStatus'', ''UA Interlocking'', etc. All of these values are available as attributes in the ''[[targetcli]]'' device context.
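
For example, a backstore object's attributes can be listed and tuned from its device context in ''targetcli''; the sketch below assumes an IBLOCK object named <code>disk0</code>, and the available attribute names vary by backstore type and kernel version:

<pre>
/> cd /backstores/iblock/disk0
/backstores/iblock/disk0> get attribute
/backstores/iblock/disk0> get attribute block_size queue_depth
/backstores/iblock/disk0> set attribute emulate_tpu=1
</pre>

Here <code>get attribute</code> without arguments lists all attributes of the object, while <code>set attribute</code> changes individual values.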

== Glossary ==

* {{anchor|CNA}} '''Converged Network Adapter''' ('''CNA'''): An Ethernet PCIe network adapter ([[NIC]]) that natively supports [[RDMA]] (via [[RoCE]]), also called [[RNIC]].
* {{anchor|Demo Mode}} '''Demo Mode''': Disables authentication for an iSCSI Endpoint, i.e. its [[ACL]]s are disabled. Demo Mode grants read-only access to all iSCSI Initiators that attempt to connect to that specific Endpoint. See the [[iSCSI]] entry on how to enable [[iSCSI#Demo mode|Demo Mode]].
* {{anchor|DIF}} '''Data Integrity Field''' ('''DIF'''): An approach to protect data integrity in computer data storage. It was proposed in 2003 by the T10 committee of the International Committee for Information Technology Standards.
* {{anchor|EUI}} '''Extended Unique Identifier''' ('''EUI'''): A 64-bit number that uniquely identifies every device in the world. The format consists of 24 bits that are unique to a given company, and 40 bits assigned by the company to each device it builds.
* {{anchor|I_T Nexus}} '''I_T Nexus''': An I_T Nexus denotes a live session between an [[Initiator]] and a target.
* {{anchor|Initiator}} '''[[Initiator]]''': The originating end of a SCSI session. Typically a controlling device such as a computer.
* {{anchor|IPS}} '''Internet Protocol Storage''' ('''IPS'''): The class of protocols or devices that use the IP protocol to move data in a storage network. FCIP, iFCP, and [[iSCSI]] are all examples of IPS protocols.
* {{anchor|IQN}} '''iSCSI Qualified Name''' ('''IQN'''): A name format for iSCSI that uniquely identifies every device in the world (e.g. iqn.5886.com.acme.tapedrive.sn-a12345678).
* {{anchor|ISID}} '''Initiator Session Identifier''' ('''ISID'''): A 48-bit number, generated by the Initiator, that uniquely identifies a session between the Initiator and the target. This value is created during the login process, and is sent to the target with a Login PDU.
* {{anchor|MPIO}} '''Multipath I/O''' ('''MPIO'''): A method by which data can take multiple redundant paths between a server and storage.
* {{anchor|Network Portal}} '''Network Portal''': The combination of an iSCSI Endpoint with an IP address plus a TCP port. The TCP port number for the iSCSI protocol defined by IANA is 3260.
* {{anchor|NIC}} '''Network Interface Card''' ('''NIC'''): An Ethernet PCIe network adapter.
* {{anchor|NPIV}} '''N_Port ID Virtualization''' ('''NPIV'''): A [[Fibre Channel]] facility allowing multiple N_Port IDs to share a single physical N_Port. This allows multiple Fibre Channel initiators to occupy a single physical port, easing hardware requirements in Storage Area Network design, especially where virtual SANs are called for. NPIV is defined by the Technical Committee T11 in the Fibre Channel - Link Services (FC-LS) specification.
* {{anchor|NTB}} '''Non-Transparent Bridging''' ('''NTB'''): Non-transparent bridges in PCI systems support intelligent adapters in enterprise systems and multiple processors in embedded systems. The Intel DrawBridge established the paradigm of the embedded bridge and became a de facto standard in such environments as Compact PCI and intelligent adapters for enterprise systems. In these systems, the non-transparent bridge functions as a gateway between the local subsystem and the backplane.<ref>{{cite book |url=http://download.intel.com/design/intarch/papers/323328.pdf |title=Intel® Xeon® Processor C5500/C3500 Series Non-Transparent Bridge |series=323328-001 |author=Mark J. Sullivan |publisher=Intel |location=Santa Clara |date=January 2010}}</ref>
* {{anchor|OUI}} '''Organizationally Unique Identifier''' ('''OUI'''): A 24-bit number that is purchased from the IEEE Registration Authority. This identifier uniquely identifies a vendor, manufacturer, or other organization (referred to by the IEEE as the “assignee”) globally or worldwide and effectively reserves a block of each possible type of derivative identifier (such as MAC addresses, group addresses, Subnetwork Access Protocol protocol identifiers, etc.) for the exclusive use of the assignee, see [http://en.wikipedia.org/wiki/Organizationally_Unique_Identifier OUI Wikipedia entry]. The OUI is subsequently used by the assignee to create particular instances of these identifiers for various purposes, such as the identification of a particular piece of equipment.

== Inclusion in Linux distributions ==

{{T}} and ''[[targetcli]]'' are included in most Linux distributions by default. Here is an overview of the most popular distributions:

{{Linux Inclusion}}
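
Installation typically only requires the distribution package; a sketch of common commands (package and service names vary between distributions and releases):

<pre>
# Fedora, RHEL, CentOS, Scientific Linux (the package is named fcoe-target-utils on some older releases)
yum install targetcli

# Debian, Ubuntu
apt-get install targetcli

# On systemd-based distributions that ship it, enable the configuration restore service
systemctl enable target.service
</pre>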

== Timeline ==

{{T}} and fabric modules have gone upstream into the Linux kernel as follows:

* Linux 2.6.38 (2011-03-14<ref>{{cite web| url=https://lkml.org/lkml/2011/3/14/508| title=Linux 2.6.38| author=Linus Torvalds| date=2011-03-14| publisher=lkml.org}}</ref>): {{T}} [[SCSI]] target core<ref>{{cite web| url=http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=38567333a6dabd0f2b4150e9fb6dd8e3ba2985e5| title=Target merge| author=Linus Torvalds| date=2011-01-14| publisher=lkml.org}}</ref>
* Linux 2.6.39 (2011-05-18<ref>{{cite web| url=https://lkml.org/lkml/2011/5/19/16| title=Linux 2.6.39| author=Linus Torvalds| date=2011-05-18| publisher=lkml.org}}</ref>): [[tcm_loop]] (SCSI support on top of any raw hardware)
* Linux 3.0 (2011-07-21<ref>{{cite web| url=https://lkml.org/lkml/2011/7/21/455| title=Linux 3.0| author=Linus Torvalds| date=2011-07-21| publisher=lkml.org}}</ref>): [[Fibre Channel over Ethernet|FCoE]] (by Cisco)
* Linux 3.9 (2013-04-28<ref>{{cite web| url=https://lkml.org/lkml/2013/4/28/69| title=Linux 3.9 released| author=Linus Torvalds| date=2013-04-28| publisher=lkml.org}}</ref>): [[Fibre Channel|16 GFC]] (QLogic HBAs)
* Linux 3.10 (2013-06-30<ref>{{cite web| url=http://marc.info/?l=linux-kernel&m=137263382100690&w=2| title=Linux 3.10| author=Linus Torvalds| date=2013-06-30| publisher=marc.info}}</ref>): [[InfiniBand]]/[[iSER]] (Mellanox HCAs and CNAs)
* Linux 3.12 (2013-11-03<ref>{{cite web| url=https://lkml.org/lkml/2013/11/3/160| title=Linux 3.12 released..| author=Linus Torvalds| date=2013-11-03| publisher=lkml.org}}</ref>): [[VAAI]]
* Linux 3.14 (2014-03-30<ref>{{cite web| url=http://kernelnewbies.org/Linux_3.14| title=Linux 3.14| date=2014-03-30| publisher=kernelnewbies.org}}</ref>): T10 [[DIF]] core, T10 Referrals, [[NPIV]]
* Linux 3.15 (2014-06-08<ref>{{cite web| url=http://kernelnewbies.org/Linux_3.15| title=Linux 3.15| date=2014-06-08| publisher=kernelnewbies.org}}</ref>): T10 DIF iSER
* Linux 3.16 (2014-08-03<ref>{{cite web| url=http://kernelnewbies.org/Linux_3.16| title=Linux 3.16| date=2014-08-03| publisher=kernelnewbies.org}}</ref>): Qlogic T10 DIF, vhost T10 DIF
* Linux 3.17 (2014-10-05<ref>{{cite web| url=http://kernelnewbies.org/Linux_3.17| title=Linux 3.17| date=2014-10-05| publisher=kernelnewbies.org}}</ref>): Userspace backend (TCMU), Xen host paravirtualized driver (xen-back)
* Linux 3.20 (planned): Update vhost-scsi to support virtio v1.0 specification requirements
* Linux 3.22 (planned): NVMe-OF reference target driver

== Wikipedia ==

== Contact ==

Please direct all technical discussion on ''targetcli'' to the [http://www.spinics.net/lists/target-devel/ target-devel] mailing list ([mailto:target-devel@vger.kernel.org post], [mailto:majordomo@vger.kernel.org&body=subscribe%20target-devel subscribe], [http://vger.kernel.org/vger-lists.html#target-devel list info], [http://dir.gmane.org/gmane.linux.scsi.target.devel gmane archive]). Please see [[Support]] for more information.

== See also ==

* [[{{OS}}]], [[targetcli]]
* Fabric modules: [[FCoE]], [[Fibre Channel]], [[IBM vSCSI]], [[iSCSI]], [[iSER]], [[SRP]], [[vHost]]
* Distributions: [[RHEL]] 4/5/6, SLES11, [[CentOS]], [[Debian]], [[Fedora]], [[openSUSE]], [[Ubuntu]], etc.

== External links ==

* {{T}} Wikipedia entries: [http://de.wikipedia.org/wiki/LIO German] [http://en.wikipedia.org/wiki/LIO_Target English]
* {{LIO Admin Manual}}
* RTSlib Reference Guide {{Lib Ref Guide HTML}}{{Lib Ref Guide PDF}}
* [http://www.scsi4me.com/scsi-connectors.htm Adapters by SCSI connector type]

[[Category:T10]]
[[Category:LIO]]
[[Category:SCSI]]
__NOEDITSECTION__

