Main Page

The Linux SCSI Target Wiki


Revision as of 05:16, 29 September 2013

Welcome to Linux-IO,
the generic LIO wiki.

Summary

LinuxIO (Linux-IO) is the standard open-source SCSI target for shared data storage in Linux. It supports all prevalent storage fabrics, including Fibre Channel (QLogic), FCoE, IEEE 1394, iSCSI, iSER (Mellanox InfiniBand), SRP (Mellanox InfiniBand), USB, and vHost.

The advanced feature set of LinuxIO has made it the SCSI target of choice for many storage array vendors, for instance allowing them to achieve VMware® Ready certifications. Native support for LinuxIO in QEMU/KVM, libvirt, and OpenStack™ makes it an attractive storage option for cloud deployments.

LinuxIO includes targetcli, a management shell and API with a single namespace for all storage objects.

LinuxIO and targetcli are developed by Datera, Inc., a data storage systems and software company located in Mountain View in the Silicon Valley.

Target

Frontend

Fabric Modules implement the protocols to transmit data over diverse fabrics, providing fabric-technology independence.

Backend

Backstores implement the methods to access data on devices, providing storage media independence.
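The media independence described above can be sketched as a single byte-addressed read/write interface with interchangeable implementations. The classes below are invented for illustration, assuming memory-backed and file-backed stores analogous to LIO's ramdisk and fileio backstores; this is not LIO source code.

```python
# Illustrative sketch (not LIO code): one interface, multiple storage media.
import io

class Backstore:
    """Minimal common interface: byte-addressed read/write."""
    def read(self, offset: int, length: int) -> bytes:
        raise NotImplementedError
    def write(self, offset: int, data: bytes) -> None:
        raise NotImplementedError

class RamdiskBackstore(Backstore):
    """Backs storage with memory, analogous to a ramdisk backstore."""
    def __init__(self, size: int):
        self._buf = bytearray(size)
    def read(self, offset, length):
        return bytes(self._buf[offset:offset + length])
    def write(self, offset, data):
        self._buf[offset:offset + len(data)] = data

class FileBackstore(Backstore):
    """Backs storage with a file object, analogous to a fileio backstore."""
    def __init__(self, f):
        self._f = f
    def read(self, offset, length):
        self._f.seek(offset)
        return self._f.read(length)
    def write(self, offset, data):
        self._f.seek(offset)
        self._f.write(data)

# The target engine above this layer sees only the Backstore interface,
# so the same SCSI-processing code serves either medium unchanged.
stores = [RamdiskBackstore(4096), FileBackstore(io.BytesIO(bytes(4096)))]
for s in stores:
    s.write(512, b"hello")
    assert s.read(512, 5) == b"hello"
```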

Architecture

The LinuxIO engine implements generic SCSI semantics.
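One concrete piece of those generic semantics: a target engine must decode SCSI Command Descriptor Blocks the same way no matter which fabric delivered them. The sketch below parses a READ(10) CDB following the standard SCSI block-command layout (byte 0 opcode 0x28, bytes 2-5 big-endian LBA, bytes 7-8 big-endian transfer length); it is an illustration, not LinuxIO source.

```python
# Illustrative READ(10) CDB decoder, per the SCSI block-command layout.
import struct

READ_10 = 0x28  # READ(10) operation code

def parse_read10(cdb: bytes):
    """Return (logical block address, transfer length) from a READ(10) CDB."""
    if len(cdb) != 10 or cdb[0] != READ_10:
        raise ValueError("not a READ(10) CDB")
    lba, = struct.unpack_from(">I", cdb, 2)      # bytes 2-5, big-endian
    length, = struct.unpack_from(">H", cdb, 7)   # bytes 7-8, big-endian
    return lba, length

# READ(10) requesting 8 blocks starting at LBA 0x1000
cdb = bytes([0x28, 0, 0x00, 0x00, 0x10, 0x00, 0, 0x00, 0x08, 0])
assert parse_read10(cdb) == (0x1000, 8)
```

Because the CDB arrives identically over iSCSI, Fibre Channel, or any other fabric module, this parsing lives once in the engine rather than in each frontend.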

Advanced SCSI feature set
  • High-performance, non-blocking, multithreaded architecture with SSE4.2 support and no single point of failure

Compatibility and certifications

LinuxIO works with Initiators of the following operating systems:
  • Microsoft: Windows® Server 2008/R2/2012 and Windows® XP/Vista/7/8
  • Apple Mac OS X (via third-party initiator)
  • Linux: RHEL 4/5/6, SLES 10.3/11, CentOS, Debian, Fedora, openSUSE, Ubuntu
  • Unix: Solaris 10, OpenSolaris, HP-UX
  • VMs: vSphere™ 5, Red Hat KVM, Microsoft Hyper-V, Oracle xVM/VirtualBox, Xen

LinuxIO enables VMware Ready certifications (incl. vSphere™ 5). It also passes the Microsoft Windows® Server 2008 / R2 Failover Cluster compatibility test suites.

Targetcli

targetcli provides the fabric-agnostic single-node management shell for LinuxIO. targetcli aggregates and presents all SAN functionality to LinuxIO via the RTSlib library and API.
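A typical workflow in that single namespace creates a backstore, a target, and an export path as objects addressed by path. The session below is a hypothetical configuration sketch: the file path, sizes, and IQNs are examples, and exact command syntax varies across targetcli releases.

```shell
# Hypothetical targetcli session (names and IQNs are examples).

# Create a fileio backstore backed by a 1 GiB file
targetcli /backstores/fileio create name=disk0 file_or_dev=/srv/disk0.img size=1G

# Create an iSCSI target (a default portal group, tpg1, comes with it)
targetcli /iscsi create iqn.2013-09.org.example.host:target0

# Export the backstore as a LUN of the target
targetcli /iscsi/iqn.2013-09.org.example.host:target0/tpg1/luns create /backstores/fileio/disk0

# Allow a specific initiator to access the LUN
targetcli /iscsi/iqn.2013-09.org.example.host:target0/tpg1/acls create iqn.2013-09.org.example.client:initiator

# Persist the configuration
targetcli saveconfig
```

Every object — backstores, targets, portal groups, LUNs, ACLs — lives in the same tree, which is what the "single namespace for all storage objects" above refers to.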

LIO

LIO integrates LinuxIO and targetcli into a single-node Unified Storage operating system (see the LIO Admin Manual). LIO supports VMware Ready certification, including VMware vSphere™ 5.

An LIO subscription provides access to additional LIO packages and update services.

High availability and clustering

LinuxIO is designed from the ground up to support highly available and clustered storage:
  • Deeply embedded high availability (Network RAID1)
  • Scale-out clusters and disaster recovery solutions

Initiator

The Core-iSCSI Initiator is a high-end iSCSI initiator that resolves a number of known issues with Open-iSCSI, the standard Linux initiator.

Core-iSCSI is available on Linux and Windows®, and it has been ported to a wide range of platforms and devices.

Datera, Inc. ported OCFS2 onto the Nokia Internet Tablets on top of the Core-iSCSI Initiator.

RTS Director

RTS Director is a distributed, highly available cluster management framework. It comprises a shell, an active library, and an API. The active library and API provide an extensible platform with a unified namespace for managing complex functionality, such as high availability and cluster striping. The shell offers location-transparent access to all objects in the SAN. New functionality and devices can be added via plugin modules.

RTS Director requires zero configuration. It is based on a symmetrically distributed architecture: there is no single point of failure, no cluster controller, and no central database. Nodes running RTS Director automatically discover and join the cluster when coming up.
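The discovery behavior described above follows a common pattern: a node coming up announces itself, and every peer that hears the announcement updates its own membership view, with no central registry involved. The sketch below simulates that pattern over UDP on the loopback interface; it is not RTS Director code, and the port number and message format are invented for the example.

```python
# Hedged sketch of announce-and-listen cluster discovery (not RTS Director code).
import json
import socket
import threading
import time

PORT = 45454  # example port, not an RTS Director constant

def listen(sock, members, stop):
    """Collect node announcements until asked to stop."""
    while not stop.is_set():
        try:
            data, _ = sock.recvfrom(1024)
        except socket.timeout:
            continue
        # Symmetric design: every listener learns of every node on its own.
        members.add(json.loads(data)["node"])

def announce(node):
    """A node's hello on startup; sent to loopback here for the demo."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(json.dumps({"node": node}).encode(), ("127.0.0.1", PORT))

members, stop = set(), threading.Event()
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", PORT))  # bind before any announcements so none are lost
recv.settimeout(0.2)
listener = threading.Thread(target=listen, args=(recv, members, stop))
listener.start()

announce("node-a")  # new nodes simply say hello; no registration step
announce("node-b")

time.sleep(0.5)  # give the listener a moment to drain the socket
stop.set()
listener.join()
recv.close()
print(sorted(members))
```

In a real cluster the announcement would go to a broadcast or multicast address rather than loopback, but the key property is the same: membership emerges from peer-to-peer messages, not from a controller.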
