Solaris 10 Zones
In its simplest form, a zone is a virtual operating system environment created within a single instance of the Solaris operating system. Efficient
resource utilization is the main goal of this technology.
Solaris 10's zone partitioning technology can be used to create local zones that behave like virtual servers. All local zones are controlled from the
system's global zone. Processes running in a zone are completely isolated from the rest of the system. This isolation prevents processes
running in one zone from monitoring or affecting processes running in other zones. Note that processes running in a local zone can be
monitored from the global zone, but processes running in the global zone, or even in another local zone, cannot be monitored from a local zone.
At present, the upper limit on the number of zones that can be created and run on a system is 8192; of course, depending on resource
availability, a single system may or may not be able to run all the configured zones effectively.
Global Zone
When we install Solaris 10, a global zone gets installed automatically, and the core operating system runs in the global zone. To list all the
configured zones, we can use the zoneadm command:
% zoneadm list -v
  ID NAME     STATUS     PATH
   0 global   running    /

% df -h /
Filesystem            size  used avail capacity  Mounted on
/dev/dsk/c1t1d0s0      29G   22G  7.1G    76%    /
% ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
eri0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 192.168.74.217 netmask fffffe00 broadcast 192.168.75.255
        ether 0:3:ba:2d:0:84
http://quickreference.weebly.com/solaris-10-zone-basics.html
1/23/2014
1. Create & configure a new 'sparse root' local zone, with root privileges
% zonecfg -z appserv
appserv: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:appserv> create
zonecfg:appserv> set zonepath=/zones/appserver
zonecfg:appserv> set autoboot=true
zonecfg:appserv> add net
zonecfg:appserv:net> set physical=eri0
zonecfg:appserv:net> set address=192.168.175.126
zonecfg:appserv:net> end
zonecfg:appserv> add fs
zonecfg:appserv:fs> set dir=/repo2
zonecfg:appserv:fs> set special=/dev/dsk/c2t40d1s6
zonecfg:appserv:fs> set raw=/dev/rdsk/c2t40d1s6
zonecfg:appserv:fs> set type=ufs
zonecfg:appserv:fs> set options=noforcedirectio
zonecfg:appserv:fs> end
zonecfg:appserv> add inherit-pkg-dir
zonecfg:appserv:inherit-pkg-dir> set dir=/opt/csw
zonecfg:appserv:inherit-pkg-dir> end
zonecfg:appserv> info
zonepath: /zones/appserver
autoboot: true
pool:
inherit-pkg-dir:
dir: /lib
inherit-pkg-dir:
dir: /platform
inherit-pkg-dir:
dir: /sbin
inherit-pkg-dir:
dir: /usr
inherit-pkg-dir:
dir: /opt/csw
net:
address: 192.168.175.126
physical: eri0
zonecfg:appserv> verify
zonecfg:appserv> commit
zonecfg:appserv> exit
Sparse Root Zone Vs Whole Root Zone
In a Sparse Root Zone, the directories /usr, /sbin, /lib and /platform will be mounted as loopback file systems. That is, although all those
directories appear as normal directories under the sparse root zone, they will be mounted as read-only file systems. Any change to those
directories in the global zone can be seen from the sparse root zone.
However, if you need the ability to write into any of the directories listed above, you may need to configure a Whole Root Zone. For example,
software like ClearCase needs write permissions to the /usr directory; in that case, configuring a Whole Root Zone is the way to go. The steps for
creating and configuring a new 'Whole Root' local zone are as follows:
% zonecfg -z appserv
appserv: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:appserv> create
zonecfg:appserv> set zonepath=/zones/appserver
zonecfg:appserv> set autoboot=true
zonecfg:appserv> add net
zonecfg:appserv:net> set physical=eri0
zonecfg:appserv:net> set address=192.168.175.126
zonecfg:appserv:net> end
zonecfg:appserv> add inherit-pkg-dir
zonecfg:appserv:inherit-pkg-dir> set dir=/opt/csw
zonecfg:appserv:inherit-pkg-dir> end
zonecfg:appserv> remove inherit-pkg-dir dir=/usr
zonecfg:appserv> remove inherit-pkg-dir dir=/sbin
zonecfg:appserv> remove inherit-pkg-dir dir=/lib
zonecfg:appserv> remove inherit-pkg-dir dir=/platform
zonecfg:appserv> info
zonepath: /zones/appserver
autoboot: true
pool:
inherit-pkg-dir:
dir: /opt/csw
net:
address: 192.168.175.126
physical: eri0
zonecfg:appserv> verify
zonecfg:appserv> commit
zonecfg:appserv> exit
Brief explanation of the properties that I added:
* zonepath=/zones/appserver
The local zone's root directory, relative to the global zone's root directory. i.e., the local zone will have all of its directories (bin, lib, usr, dev, net, etc, var, opt, and so on)
physically under the /zones/appserver directory
* autoboot=true
Boot this zone automatically when the global zone is booted
* physical=eri0
The eri0 card is used for the physical interface
* address=192.168.175.126
192.168.175.126 is the zone's IP address. It must have all the necessary DNS entries
[Added 08/25/08] The whole add fs section adds the file system to the zone. In this example, the file system that is being exported to the zone
is an existing UFS file system.
* set dir=/repo2
/repo2 is the mount point in the local zone
* set special=/dev/dsk/c2t40d1s6, set raw=/dev/rdsk/c2t40d1s6
Grant access to the block (/dev/dsk/c2t40d1s6) and raw (/dev/rdsk/c2t40d1s6) devices so the file system can be mounted in the non-global
zone. Make sure the block device is not mounted anywhere right before installing the non-global zone. Otherwise, the zone installation may fail
with "ERROR: file system check </usr/lib/fs/ufs/fsck> of </dev/rdsk/c2t40d1s6> failed: exit status <33>: run fsck manually". In that case,
unmount the file system that is being exported, uninstall the partially installed zone (zoneadm -z <zone> uninstall), then install the zone from
scratch (no need to re-configure the zone; just do a re-install).
* set type=ufs
The file system is of type UFS
* set options=noforcedirectio
Mount the file system with the option noforcedirectio[/Added 08/25/08]
* dir=/opt/csw
A read-only path that will be lofs'd (loopback mounted) from the global zone. Note: this works for a sparse root zone only -- a whole root zone cannot have any
shared file systems
The zonecfg commands verify and commit verify and commit the zone configuration, respectively. Note that it is not strictly necessary to
commit the zone configuration; it is done automatically when we exit the zonecfg tool. info displays information about the current
configuration.
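The same configuration can also be scripted instead of typed interactively: zonecfg accepts a command file through its -f option. A minimal sketch of such a file, covering just the basic properties from the session above (the file name and contents here are illustrative, not the full example):

```
create
set zonepath=/zones/appserver
set autoboot=true
add net
set physical=eri0
set address=192.168.175.126
end
verify
commit
```

Saved as, say, /tmp/appserv.cfg, it would be applied with: zonecfg -z appserv -f /tmp/appserv.cfg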
Check the state of the newly created/configured zone
% zoneadm list -cv
  ID NAME     STATUS       PATH
   0 global   running      /
   - appserv  configured   /zones/appserver
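When scripting zone administration, it is handy to pull a zone's state out of `zoneadm list -cv` output. A minimal Bourne shell sketch; since zoneadm exists only on Solaris, the sample output above is inlined here as a string:

```shell
#!/bin/sh
# Sample `zoneadm list -cv` output, inlined so the sketch runs anywhere.
# On a real Solaris 10 host you would capture it with: output=`zoneadm list -cv`
output='  ID NAME     STATUS       PATH
   0 global   running      /
   - appserv  configured   /zones/appserver'

# Print the STATUS column for the named zone
# (column 2 is NAME, column 3 is STATUS)
zone_state() {
    printf '%s\n' "$output" | awk -v z="$1" '$2 == z { print $3 }'
}

zone_state appserv    # prints: configured
zone_state global     # prints: running
```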
The next step is to install the configured zone. It takes a while to install the necessary packages:
% zoneadm -z appserv install
/zones must not be group writable.
could not verify zonepath /zones/appserver because of the above errors.
zoneadm: zone appserv failed to verify
% ls -ld /zones
drwxrwxr-x   3 root     root     /zones

Since /zones must not be group writable, let's change the mode to 700 and install again.

% chmod 700 /zones
% zoneadm -z appserv install
% zoneadm list -cv
  ID NAME     STATUS     PATH
   0 global   running    /
   - appserv  installed  /zones/appserver
1. Boot up the appserv zone. Let's note down the ifconfig output to see how it changes after the local zone boots up. Also observe that there
is no answer from the server yet, since the zone is not up:
% ping 192.168.175.126
no answer from 192.168.175.126
% ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
eri0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 192.168.74.217 netmask fffffe00 broadcast 192.168.75.255
        ether 0:3:ba:2d:0:84
% zoneadm -z appserv boot
zoneadm: zone 'appserv': WARNING: eri0:1: no matching subnet found in netmasks(4) for 192.168.175.126;
using default of 255.255.0.0.
% zoneadm list -cv
  ID NAME     STATUS    PATH
   0 global   running   /
   1 appserv  running   /zones/appserver
% ping 192.168.175.126
192.168.175.126 is alive
% ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
lo0:1: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        zone appserv
        inet 127.0.0.1 netmask ff000000
eri0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 192.168.74.217 netmask fffffe00 broadcast 192.168.75.255
        ether 0:3:ba:2d:0:84
eri0:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        zone appserv
        inet 192.168.175.126 netmask ffff0000 broadcast 192.168.255.255
Observe that the zone appserv has its own virtual instance of lo0, the system's loopback interface, and that the zone's IP address is also being served
by the eri0 network interface.
1. Log in to the zone console and perform the internal zone configuration. The zlogin utility can be used to enter a zone. The first time we log
in to the console, we get a chance to answer a series of questions for the desired zone configuration. The -C option of zlogin is used to log
in to the zone console (and -e sets the escape character):
% zlogin -C -e [ appserv
[Connected to zone 'appserv' console]
Select a Language
0. English
1. es
2. fr
Please make a choice (0 - 2), or press h or ? for help: 0
Select a Locale
%
That is all there is to creating a local zone. Now simply log in to the newly created zone, just like connecting to any other system on the
network.
Mounting file systems in a non-global zone
Sometimes it might be necessary to export file systems or create new file systems when the zone is already running. This section focuses on
exporting block devices and raw devices in such situations, i.e., when the local zone is already configured.
Exporting the Raw Device(s) to a non-global zone
If a file system does not already exist on the device, the raw device can be exported as it is, so that the file system can be created inside the non-global
zone using the normal newfs command.
The following example shows how to export the raw device to a non-global zone when the zone is already configured.
# zonecfg -z appserv
zonecfg:appserv> add device
zonecfg:appserv:device> set match=/dev/rdsk/c5t0d0s6
zonecfg:appserv:device> end
zonecfg:appserv> verify
zonecfg:appserv> commit
zonecfg:appserv> exit
Now that the raw device is accessible within the non-global zone (reboot the zone so the newly added device becomes visible), we can use the regular Solaris commands to create a file system such as UFS.
eg.,
# newfs -v c5t0d0s6
newfs: construct a new file system /dev/rdsk/c5t0d0s6: (y/n)? y
mkfs -F ufs /dev/rdsk/c5t0d0s6 1140260864 -1 -1 8192 1024 251 1 120 8192 t 0 -1 8 128 n
Warning: 4096 sector(s) in last cylinder unallocated
/dev/rdsk/c5t0d0s6: 1140260864 sectors in 185590 cylinders of 48 tracks, 128 sectors
556768.0MB in 11600 cyl groups (16 c/g, 48.00MB/g, 5824 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 98464, 196896, 295328, 393760, 492192, 590624, 689056, 787488, 885920,
Initializing cylinder groups:
...............................................................................
...............................................................................
.........................................................................
super-block backups for last 10 cylinder groups at:
1139344160, 1139442592, 1139541024, 1139639456, 1139737888, 1139836320,
1139934752, 1140033184, 1140131616, 1140230048
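To have the newly created file system mounted automatically whenever the zone boots, an entry can be added to the zone's own /etc/vfstab. A sketch of such an entry, assuming the device has been exported to the zone as shown above; the /data mount point is illustrative, not part of the original example:

```
/dev/dsk/c5t0d0s6   /dev/rdsk/c5t0d0s6   /data   ufs   2   yes   -
```

The columns are: device to mount, device to fsck, mount point, file system type, fsck pass, mount-at-boot flag, and mount options.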
Exporting the Block Device(s) to a non-global zone
If a file system already exists on the device, the block device can be exported as it is, so that the file system can be mounted inside the non-global zone
using the normal Solaris command, mount.
The following example shows how to export the block device to a non-global zone when the zone is already configured.
# zonecfg -z appserv
zonecfg:appserv> add device
zonecfg:appserv:device> set match=/dev/dsk/c5t0d0s6
zonecfg:appserv:device> end
zonecfg:appserv> verify
zonecfg:appserv> commit
zonecfg:appserv> exit
In this example /dev/dsk/c5t0d0s6 is being exported.
After the zonecfg step, reboot the non-global zone to make the block device visible inside the non-global zone. After the reboot, check the
existence of the block device; and mount the file system within the non-global zone.
# hostname
v440appserv
# ls -l /dev/dsk/c5t0d0s6
brw-r-----   1 root     sys      /dev/dsk/c5t0d0s6
# fstyp /dev/dsk/c5t0d0s6
ufs
# mount /dev/dsk/c5t0d0s6 /mnt
# df -h /mnt
Filesystem             size   used  avail capacity  Mounted on
/dev/dsk/c5t0d0s6      535G    64M   530G     1%    /mnt
Mounting a file system from the global zone into the non-global zone
Sometimes it is desirable to have the flexibility of mounting a file system in the global zone or non-global zone on-demand. In such situations,
rather than exporting the file systems or block devices into the non-global zone, create the file system in the global zone and mount the file
system directly from the global zone into the non-global zone. Make sure to unmount that file system in the global zone if mounted, before
attempting to mount it in the non-global zone.
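The key detail here is the path mapping: a path such as /repo1 inside a zone appears under <zonepath>/root/repo1 as seen from the global zone. A small sketch of that mapping as a shell function (the /zones/appserv zonepath and /repo1 mount point are the values used in the surrounding example):

```shell
#!/bin/sh
# Map a mount point inside a non-global zone to the path the global zone sees.
# A zone's file system tree lives under <zonepath>/root.
zone_global_path() {
    zonepath="$1"    # e.g. /zones/appserv
    inner="$2"       # e.g. /repo1
    printf '%s/root%s\n' "$zonepath" "$inner"
}

zone_global_path /zones/appserv /repo1    # prints: /zones/appserv/root/repo1
```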
eg.,
In the non-global zone:
# mkdir /repo1
In the global zone:
# df -h /repo1
Filesystem             size   used  avail capacity  Mounted on
/dev/dsk/c2t40d0s6     134G    64M   133G     1%    /repo1
# umount /repo1
# ls -ld /zones/appserv/root/repo1
drwxr-xr-x   2 root     root     /zones/appserv/root/repo1
# mount /dev/dsk/c2t40d0s6 /zones/appserv/root/repo1

Now, from within the non-global zone:

# df -h /repo1
Filesystem             size   used  avail capacity  Mounted on
/dev/dsk/c2t40d0s6     134G    64M   133G     1%    /repo1
To unmount the file system from the non-global zone, run the following command from the global zone.
# umount /zones/appserv/root/repo1
Removing the file system from the non-global zone
eg.,
Earlier, in the zone creation step, the block device /dev/dsk/c2t40d1s6 was exported and mounted on the mount point /repo2 inside the non-global zone. To remove the file system completely from the non-global zone, run the following in the global zone:
# zonecfg -z appserv
zonecfg:appserv> remove fs dir=/repo2
zonecfg:appserv> verify
zonecfg:appserv> commit
zonecfg:appserv> exit
Reboot the non-global zone for this setting to take effect.
Shutting down and booting up the local zones
To bring down the local zone:
% zlogin appserv shutdown -i 0
To boot up the local zone:
% zoneadm -z appserv boot
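The two commands above can be combined into a tiny restart helper. Since zlogin and zoneadm exist only on Solaris, this sketch just prints the commands it would run (drop the echo on a real Solaris 10 host):

```shell
#!/bin/sh
# Dry-run sketch: print the commands that would restart a local zone.
restart_zone() {
    echo "zlogin $1 shutdown -i 0"    # graceful shutdown from inside the zone
    echo "zoneadm -z $1 boot"         # boot it again from the global zone
}

restart_zone appserv
```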
Just for the sake of completeness, the following steps show how to remove a local zone.
Steps to delete a Local Zone
Assuming the zone has already been brought down (as shown above), uninstall it; the zone's state changes from installed to configured:

% zoneadm list -cv
  ID NAME     STATUS     PATH
   0 global   running    /
   - appserv  installed  /zones/appserver

% zoneadm -z appserv uninstall

% zoneadm list -cv
  ID NAME     STATUS      PATH
   0 global   running     /
   - appserv  configured  /zones/appserver

Finally, delete the zone configuration; after this, only the global zone remains:

% zonecfg -z appserv delete

% zoneadm list -cv
  ID NAME     STATUS    PATH
   0 global   running   /
Steps to create and migrate a Local Zone using an existing zone's configuration

Start by exporting the configuration of an existing zone (siebeldb in this example) to a file, and edit the copy as needed (zonepath, network settings, etc.):

# zonecfg -z siebeldb export -f /tmp/siebeldb.config.cfg

# zoneadm list -cv
  ID NAME      STATUS     PATH              BRAND   IP
   0 global    running    /                 native  shared
   - siebeldb  installed  /zones/dbserver   native  excl
1. Create the zone root directory for the new zone being created
# mkdir /zones3/oraclebi
# chmod 700 /zones3/oraclebi
# ls -ld /zones3/oraclebi
drwx------   2 root     root     /zones3/oraclebi
1. Create a new (empty, non-configured) zone in the usual manner with the edited configuration file as an input
# zonecfg -z oraclebi -f /tmp/siebeldb.config.cfg
# zoneadm list -cv
  ID NAME      STATUS      PATH               BRAND   IP
   0 global    running     /                  native  shared
   - siebeldb  installed   /zones/dbserver    native  excl
   - oraclebi  configured  /zones3/oraclebi   native  excl
Before moving the zonepath, halt the zone on the old host and detach it; after the detach, its state changes from installed to configured:

# zoneadm -z orabi halt
# zoneadm list -cv
  ID NAME      STATUS     PATH               BRAND   IP
   0 global    running    /                  native  shared
   1 siebeldb  running    /zones/dbserver    native  excl
   - orabi     installed  /zones3/orabi      native  shared

# zoneadm -z orabi detach
# zoneadm list -cv
  ID NAME      STATUS      PATH               BRAND   IP
   0 global    running     /                  native  shared
   1 siebeldb  running     /zones/dbserver    native  excl
   - orabi     configured  /zones3/orabi      native  shared
1. Move the zonepath for the zone to be migrated from the old host to the new host.
Do the following on the old host:
# cd /zones3
# tar -Ecf orabi.tar orabi
# compress orabi.tar
# sftp newhost
Connecting to newhost...
sftp> cd /zones3
sftp> put orabi.tar.Z
Uploading orabi.tar.Z to /zones3/orabi.tar.Z
sftp> quit
On the newhost:
# cd /zones3
# uncompress orabi.tar.Z
# tar xf orabi.tar
1. On the new host, configure the zone.
Create the equivalent zone orabi on the new host -- use the zonecfg command with the -a option and the zonepath on the new host. Make any
required adjustments to the configuration and commit the configuration.
# zonecfg -z orabi
orabi: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:orabi> create -a /zones3/orabi
zonecfg:orabi> info
zonename: orabi
zonepath: /zones3/orabi
brand: native
autoboot: false
bootargs:
pool:
limitpriv: all,!sys_suser_compat,!sys_res_config,!sys_net_config,!sys_linkdir,!sys_devices,!sys_config,!proc_zone,!dtrace_kernel,!sys_ip_config
scheduling-class:
ip-type: shared
inherit-pkg-dir:
dir: /lib
inherit-pkg-dir:
dir: /platform
inherit-pkg-dir:
dir: /sbin
inherit-pkg-dir:
dir: /usr
net:
address: IPaddress
physical: nxge1
defrouter not specified
zonecfg:orabi> add capped-memory
zonecfg:orabi:capped-memory> set physical=8G
zonecfg:orabi:capped-memory> end
zonecfg:orabi> commit
zonecfg:orabi> exit
1. Attach the zone on the new host with a validation check, and update the zone to match a host running later versions of the dependent
packages (the -u option). As with the /zones directory earlier, first make sure the parent directory of the zonepath is not group or world writable:

# ls -ld /zones3
drwxrwxrwx   5 root     root     /zones3
# chmod 700 /zones3

# zoneadm -z orabi attach -u
# zoneadm list -cv
  ID NAME     STATUS     PATH            BRAND   IP
   0 global   running    /               native  shared
   - orabi    installed  /zones3/orabi   native  shared
Note: It is possible to force the attach operation without performing the validation, with the -F option:
# zoneadm -z orabi attach -F
Be careful when using this option: it could lead to an incorrect configuration, and an incorrect configuration could result in undefined
behavior.
Tip: How do you find out whether you are connected to the primary OS instance or a virtual instance?
If the command zonename returns global, then you are connected to the OS instance that was booted from the physical hardware. If you see
any string other than global, you are connected to a virtual OS instance (a non-global zone).
Alternatively, try running the prstat -Z or zoneadm list -cv commands. If you see exactly one zone, with a non-zero zone ID, it is an indication that you are
connected to a non-global zone.
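That check is easy to script. Since zonename exists only on Solaris, this sketch takes the zone name as an argument instead of calling the command (on a real host you would pass it `zonename` output):

```shell
#!/bin/sh
# Classify an OS instance from the output of the Solaris-only `zonename`
# command. On a real Solaris 10 host: instance_kind "`zonename`"
instance_kind() {
    if [ "$1" = "global" ]; then
        echo "primary OS instance (global zone)"
    else
        echo "virtual OS instance (non-global zone: $1)"
    fi
}

instance_kind global     # prints: primary OS instance (global zone)
instance_kind appserv    # prints: virtual OS instance (non-global zone: appserv)
```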