
cleaning up your disks on solaris

I’m sure most of you know this, but for those who don’t:

If you have multiple Boot Environments (BEs), e.g. created by a software installation or upgrade, then it isn’t always possible to remove certain snapshots. This is because a BE is simply a clone of a snapshot, and a snapshot cannot be deleted as long as a clone depends on it.

If you try to remove such a snapshot, the following will happen:

zfs destroy rpool/ROOT/openindiana-6@2012-10-26-10:25:12  
cannot destroy rpool/ROOT/openindiana-6@2012-10-26-10:25:12: snapshot has dependent clones  
use "-R" to destroy the following datasets:  
rpool/ROOT/openindiana-3
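
You can reproduce this behaviour on a scratch dataset. A minimal sketch; rpool/test is a hypothetical dataset used only for illustration:

zfs snapshot rpool/test@demo                # take a snapshot
zfs clone rpool/test@demo rpool/test-clone  # clone it, just like beadm does for a BE
zfs destroy rpool/test@demo                 # fails: snapshot has dependent clones
zfs destroy rpool/test-clone                # remove the clone first...
zfs destroy rpool/test@demo                 # ...now the snapshot can be destroyed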

I don’t know why openindiana-3 depends on openindiana-6@blabla; the origin property can tell you (see below).
So look at your BEs:

beadm list  
BE                     Active Mountpoint Space Policy Created  
openindiana-6          NR     /          4,79G static 2012-11-11 14:50  
openindiana-6-backup-1 -      -          77,0K static 2012-11-11 15:04
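
If you want to know where a dependency like the one at the top comes from, the origin property shows which snapshot a clone (and therefore a BE) was created from:

zfs get origin rpool/ROOT/openindiana-3   # the VALUE column names the snapshot this BE was cloned from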

You will notice that there are several BEs which aren’t used, like openindiana-6-backup-1 here (in the Active column, NR means active now and on reboot, while - means inactive). If you haven’t changed anything on your system since the last BE was created and everything runs fine, just do:

beadm destroy openindiana-6-backup-1

You will notice that it complains about snapshots on that BE if you have time-slider running. In this case just remove the snapshots by typing:

zfs list -t snapshot -o name | grep ^rpool | xargs -n 1 zfs destroy

This will remove every snapshot on your rpool. You can adapt it to your needs and restrict it to the dataset of the BE that you want to destroy, as shown below.
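
For example, to delete only the snapshots below the BE you are about to destroy (a sketch using the backup BE from above; -H suppresses the header line, -r recurses into the dataset):

zfs list -H -t snapshot -o name -r rpool/ROOT/openindiana-6-backup-1 | xargs -n 1 zfs destroy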

Another way is to type

beadm destroy -s openindiana-6-backup-1

which will automatically destroy all snapshots of this BE as well.

Hint: you can use “beadm list -s -d” to get a more detailed view of your BEs; -s also lists the snapshots and -d the datasets belonging to each BE.

During this cleanup I saved 3-4 GB of data. Now my rpool is small enough to fit on the 16 GB SSD in my server.
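
If you want to see how much a cleanup like this reclaims, compare the pool usage before and after:

zpool list rpool   # the ALLOC column shows the space currently in use
zfs list rpool     # the USED/AVAIL view from the dataset side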