Elasticity to the DataNode using LVM

Karthik Avula
4 min read · Dec 5, 2020

Task 7.1

🔅Integrating LVM with Hadoop and providing Elasticity to DataNode Storage
🔅Increase or Decrease the Size of a Static Partition in Linux.
🔅Automating LVM Partition using a Python Script.

What is LVM?

LVM is a tool for logical volume management, which includes allocating disks, striping, mirroring, and resizing logical volumes.

On small systems (like a desktop), instead of having to estimate at installation time how big a partition might need to be, LVM allows filesystems to be easily resized as needed. This flexibility is what we call elasticity.

Using LVM is simple: you need one or more hard disks, and everything starts with the creation of Physical Volumes. These physical volumes are then merged into a Volume Group, which can be used like a single hard disk made up of multiple physical volumes. The best part of a Volume Group (think of it as our new hard disk) is its elasticity, which means we can increase or decrease our storage capacity whenever it is convenient.

Let's first attach a new hard disk to my VM.

Attaching new storage

Our first step is to create physical volumes. For this we have the command “pvcreate”, which prepares each disk so it can later be merged into a Volume Group. We can create any number of physical volumes. In my case, I am using /dev/sdb and /dev/sdc as my physical volumes.

pvcreate <your diskname>
physical volume creation

Yay! The process of creating our Physical Volumes is done. Now we need to merge these two physical volumes into a Volume Group (think of it as a new hard disk). For this we have the command “vgcreate”: give it a name, then list all the physical volumes you have!

vgcreate <Your_VG_Name> <PhysicalVolume1> <PhysicalVolume2>...<PhysicalVolumeN>
Volume group creation

After creating our volume group, we need to give it the elasticity functionality; for this we carve a Logical Volume out of it. This is where the command “lvcreate” comes into play: specify the size you need from the volume group, give the LV a name, and mention the volume group you are using.

lvcreate --size <Size> --name <NameForYourLV> <VolumegroupUsing>
creating Logical Volume

After creating the LV, think of the Logical Volume as a new hard disk of the size you specified. You need to format it with a particular file system so that a fresh inode table is created for the files you will store inside the logical volume. In my case, I am using ext4 as the format type.

mkfs.<format_type> /dev/<YourVolumeGroup>/<yourLVname>

After formatting your LV, you need to mount it on a particular folder to use it conveniently. In my Hadoop task, I need to share a folder with the master; the name I have given is /dn1, so I am using it as my mount point, and my elasticity task can be done with the same.

mount /dev/arthlvm/mylv1 /dn1
mounting to /dn1

Now, whenever we want extra storage, we can ask for it from the volume group. For example, if you need an extra 2GB, use the command “lvextend” and it will take the space from the VG. We just need to tell lvextend how much more we need: to add 2GB, use +2G (and similarly for other sizes).

lvextend --size <SizeYouNeed> <YourLVlocation>

Since we have extended our partition, the filesystem must be resized too. But here we must not run mkfs again: it creates a brand-new inode table every time, which means the existing table would be wiped and your files would be misplaced and could not be accessed correctly. Instead, we only need to grow the filesystem over the newly extended space; for that we have the command “resize2fs”, which keeps the existing inode table and extends the filesystem in place.

resize2fs /dev/arthlvm/mylv1

Since we know which commands are needed to increase or decrease our partition size, we can write a Python script like this.
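Here is a minimal sketch of such a script (not necessarily the author's exact version) that drives the same LVM commands shown above through Python's subprocess module. The disk names, the VG name “arthlvm”, the LV name “mylv1”, and the mount point /dn1 are the example values used in this post; adjust them for your setup, and note the commands need root to actually run.

```python
import subprocess

def create_commands(disks, vg, lv, size, mount_point, fstype="ext4"):
    """Commands to go from raw disks to a mounted logical volume."""
    lv_path = f"/dev/{vg}/{lv}"
    return [
        ["pvcreate"] + disks,                            # 1. physical volumes
        ["vgcreate", vg] + disks,                        # 2. merge them into a VG
        ["lvcreate", "--size", size, "--name", lv, vg],  # 3. carve out an LV
        [f"mkfs.{fstype}", lv_path],                     # 4. format (new inode table)
        ["mount", lv_path, mount_point],                 # 5. mount for the DataNode
    ]

def extend_commands(vg, lv, extra):
    """Commands to grow the LV and resize the filesystem in place."""
    lv_path = f"/dev/{vg}/{lv}"
    return [
        ["lvextend", "--size", f"+{extra}", lv_path],    # take space from the VG
        ["resize2fs", lv_path],                          # keeps the existing inode table
    ]

def run_all(commands):
    for cmd in commands:
        subprocess.run(cmd, check=True)                  # stop if any step fails

# Example (run as root):
# run_all(create_commands(["/dev/sdb", "/dev/sdc"], "arthlvm", "mylv1", "5G", "/dn1"))
# run_all(extend_commands("arthlvm", "mylv1", "2G"))
```

Keeping the command construction separate from execution makes the script easy to dry-run: you can print the command lists before letting run_all execute them.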

Thanks for reading my blog! I hope you liked it. Next time I will come back with another exciting blog.

This task is part of my journey in ARTH — The School of Technologies, guided by World Record Holder Mr. Vimal Daga sir!


CS Student. Skillset: Linux, Bash Shell Scripting, Python, AWS, Hadoop, Ansible, Docker, Kubernetes, Networking and Troubleshooting, OS Concepts.