Tuesday, October 13, 2020

Yum install from a local folder

cd <your desired rpms location>

yum --disablerepo="*" localinstall *.rpm

Linux File accessing via browser

Follow these steps:

1. cd /var/www/html

2. ln -s desired_directory linkName

3. chmod 755 desired_directory

4. In /etc/httpd/conf/httpd.conf, set "UserDir disabled"
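The steps above can be sketched as a short shell session (the paths here are placeholders for illustration, not from the original post):

```shell
# Hypothetical example: expose /data/reports through Apache's document root
cd /var/www/html
ln -s /data/reports reports    # symlink inside the docroot
chmod 755 /data/reports        # target must be world-readable and traversable
# Note: Apache also needs "Options FollowSymLinks" enabled for that directory
```

The directory would then be browsable at http://<host>/reports/.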

How to change the hostname of a machine

Use any of the following three methods:

1. At the shell prompt, run: hostname newname

2. In /etc/hosts, update the loopback entry: 127.0.0.1 newname

3. In /etc/sysconfig/network, set HOSTNAME=newname

The last one is the permanent solution; it survives a reboot.
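All three methods in one sketch (newname is a placeholder; the file paths assume a RHEL/CentOS-style layout as in the original notes):

```shell
# 1. Immediate, but lost on reboot
hostname newname

# 2. Keep name resolution consistent with the new name
echo "127.0.0.1   newname" >> /etc/hosts

# 3. Persistent across reboots (RHEL/CentOS 6-style network file)
sed -i 's/^HOSTNAME=.*/HOSTNAME=newname/' /etc/sysconfig/network
```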


Monday, May 11, 2020

Maximum possible packet rate on a 10G link


One of the smallest packets commonly seen on networks is a TCP ACK. This has a 20 byte IP header and a 20 byte TCP header, adding up to 40 bytes. Because this is smaller than ethernet's minimum payload size of 46 bytes, it is automatically padded prior to transmission to bring it up to size. It is then wrapped with a 14 byte header and 4 byte CRC, to give the minimum ethernet frame size of 64 bytes.
When transmitted, each packet must also be preceded by a 7-byte preamble and 1-byte start-of-frame delimiter, and must be followed by an inter-frame gap of at least 12 bytes. This makes the smallest transmission in ethernet effectively 84 bytes.
We now have sufficient information to calculate the maximum packet rate on a 10G link:
10 Gbps / (84 bytes * 8 bits/byte) = 14.88 Mpps
But the data rate that will be reported by most tools is significantly less, due to not counting the overheads:
14.88 Mpps * 64 bytes * 8 bits/byte = 7.62 Gbps
Finally, we can work out the IP transmission efficiency (as opposed to ethernet efficiency) - it's pretty poor with such small packets:
14.88 Mpps * 40 bytes * 8 bits/byte / 10 Gbps = 47.6%
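The arithmetic above can be reproduced with a quick awk one-liner:

```shell
awk 'BEGIN {
    wire = 84 * 8                # bits on the wire per minimum-size transmission
    pps  = 10e9 / wire           # packets per second on a 10G link
    printf "max packet rate : %.2f Mpps\n", pps / 1e6
    printf "reported rate   : %.2f Gbps\n", pps * 64 * 8 / 1e9
    printf "IP efficiency   : %.1f %%\n",   pps * 40 * 8 / 10e9 * 100
}'
```

This prints the same 14.88 Mpps, 7.62 Gbps and 47.6% figures derived in the text.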

Sunday, April 16, 2017

ECOMP : Why Opensource ?

Consider the following points:

1.  It did not make sense to rely on a proprietary solution for an end-to-end MANO (Management, Automation and Network Orchestration) solution, which would have been the traditional approach.

2. AT&T realized that ‘cracking’ VNF orchestration and automating lifecycle processes would be crucial to reaching its target of virtualizing 75% of its network by 2020.  Its decision to recruit telecom operators and blue chip vendors as founding members of ECOMP established the collaborative direction that would eventually see the project converted to an entirely open-source initiative.

3. Having direct access to the VNF code reduces the operators' dependence on vendors. This has obvious time and cost implications, and will also transform the operator-vendor relationship into one that is much more collaborative in nature.

Thursday, June 16, 2016

chroot jail : First step for jail

The jail mechanism is an implementation of operating system-level virtualization that allows administrators to partition a computer system into several independent mini-systems called jails.

A chroot on Unix operating systems is an operation that changes the apparent root directory for the current running process and its children. A program that is run in such a modified environment cannot name (and therefore normally cannot access) files outside the designated directory tree. The modified environment is called a chroot jail.

A chroot environment can be used to create and host a separate virtualized copy of the software system. This can be useful for testing and development, dependency control, compatibility, recovery, and privilege separation.


The actual jail development consisted of five parts:

1. Making sure you don't escape the chroot/jail
2. Restricting process visibility
3. Deciding what "root" can and cannot do in a jail
4. Teaching certain device drivers about jails
5. Giving each jail its own IP address


Steps to create a chroot jail for another flavor of Linux
Step 1. Have a Linux host machine.

Step 2. Copy the root filesystem of any other flavor of Linux (e.g. Ubuntu):
[root@localhost ~]# cd /home/ajay/ws/chroot/ubuntu/rootfs/
a.txt      boot/      etc/       lib/       media/     opt/       root/      selinux/   sys/       tmp/       var/
bin/       dev/       home/      lib64/     mnt/       proc/      sbin/      srv/       test.test  usr/

Step 3. Use the chroot command. It changes the prompt as shown below, with "/" as the working directory.
[root@localhost chroot]# chroot /home/ajay/ws/chroot/ubuntu/rootfs/
groups: cannot find name for group ID 490
root@localhost:/# 

Step 4. Mount the virtual filesystems.
If a command is run without mounting them first:
root@localhost:/# ps -ef | grep vim
Cannot find /proc/version - is /proc mounted?

So mount them:
root@localhost:/# mount -t proc proc /proc/
root@localhost:/# mount -t sysfs sys /sys/
root@localhost:/# mount -o bind /dev /dev/

The chroot environment now uses Ubuntu's libraries. Enjoy!

To exit the chroot jail, unmount and exit:
root@localhost:/# umount /proc
root@localhost:/# umount /sys
root@localhost:/# umount /dev
root@localhost:/# exit

Caution: This should only be used for processes that don't run as root, as root users can break out of the jail very easily. Also note that the jail's files remain fully accessible from the host's root filesystem, so they can all be viewed and modified from outside.

Important terms to know for improving the solution
Operating-system-level virtualization is a server virtualization method in which the kernel of an operating system allows the existence of multiple isolated user-space instances, instead of just one. Such instances, sometimes called containers, software containers, virtualization engines (VEs) or jails (FreeBSD jail or chroot jail), may look and feel like a real server from the point of view of their owners and users.

On Unix-like operating systems, this technology can be seen as an advanced implementation of the standard chroot mechanism. In addition to isolation mechanisms, the kernel often provides resource-management features to limit the impact of one container's activities on other containers.

Operating-system-level virtualization is not as flexible as other virtualization approaches since it cannot host a guest operating system different from the host one, or a different guest kernel. For example, with Linux, different distributions are fine, but other operating systems such as Windows cannot be hosted.

The storage hypervisor, a centrally-managed supervisory software program, provides a comprehensive set of storage control and monitoring functions that operate as a transparent virtual layer across consolidated disk pools to improve their availability, speed and utilization.

One more example:
[root@localhost chroot]# mkdir linx

[root@localhost chroot]# cd linx

[root@localhost linx]# mkdir bin lib dev tmp

[root@localhost linx]# chmod a=rwx tmp   [making it accessible to every user and process]
[root@localhost linx]# ls -lrt
total 16
drwxrwxrwx 2 root root 4096 Jun 16 16:49 tmp
drwxr-xr-x 2 root root 4096 Jun 16 16:49 lib
drwxr-xr-x 2 root root 4096 Jun 16 16:49 dev
drwxr-xr-x 2 root root 4096 Jun 16 16:52 bin

This jail will be restricted to a limited set of binaries and rights.

[root@localhost linx]#  cp /bin/bash /bin/ls bin

[root@localhost linx]# ldd bin/*
bin/bash:
        linux-vdso.so.1 =>  (0x00007fff0f9c8000)
        libtinfo.so.5 => /lib64/libtinfo.so.5 (0x0000003b28a00000)
        libdl.so.2 => /lib64/libdl.so.2 (0x0000003b1b200000)
        libc.so.6 => /lib64/libc.so.6 (0x0000003b1ae00000)
        /lib64/ld-linux-x86-64.so.2 (0x0000003b1a600000)
bin/ls:
        linux-vdso.so.1 =>  (0x00007fffe7ee3000)
        libselinux.so.1 => /lib64/libselinux.so.1 (0x0000003b1c600000)
        librt.so.1 => /lib64/librt.so.1 (0x0000003b1be00000)
        libcap.so.2 => /lib64/libcap.so.2 (0x0000003b1de00000)
        libacl.so.1 => /lib64/libacl.so.1 (0x0000003b2a200000)
        libc.so.6 => /lib64/libc.so.6 (0x0000003b1ae00000)
        libdl.so.2 => /lib64/libdl.so.2 (0x0000003b1b200000)
        /lib64/ld-linux-x86-64.so.2 (0x0000003b1a600000)
        libpthread.so.0 => /lib64/libpthread.so.0 (0x0000003b1b600000)
        libattr.so.1 => /lib64/libattr.so.1 (0x0000003b29600000)

[root@localhost linx]#  cp /lib64/libtinfo.so.5 /lib64/libdl.so.2 /lib64/libc.so.6 /lib64/ld-linux-x86-64.so.2 /lib64/libselinux.so.1 /lib64/librt.so.1 /lib64/libcap.so.2 /lib64/libacl.so.1 /lib64/libpthread.so.0 /lib64/libattr.so.1 lib/
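Copying each library by hand is error-prone; a small loop over the ldd output can automate it (a sketch, assuming the binaries are already in bin/ and resolve from /lib64 as shown above):

```shell
mkdir -p lib64
for bin in bin/*; do
    # Library paths appear either after "=>" or alone (the dynamic loader line)
    ldd "$bin" | awk '/=> \//{print $3} /^[ \t]*\//{print $1}' | while read -r lib; do
        cp -n "$lib" lib64/    # -n: skip libraries already copied
    done
done
```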

Populating the jail with two basic devices
[root@localhost linx]#  ls -l /dev/null /dev/zero
crw-rw-rw- 1 root root 1, 3 Jun 10 15:01 /dev/null
crw-rw-rw- 1 root root 1, 5 Jun 10 15:01 /dev/zero

[root@localhost linx]# mknod dev/null c 1 3
[root@localhost linx]# mknod dev/zero c 1 5

[root@localhost linx]# ls -lrt dev/*
crw-r--r-- 1 root root 1, 3 Jun 16 17:01 dev/null
crw-r--r-- 1 root root 1, 5 Jun 16 17:01 dev/zero

[root@localhost linx]# chmod a=rw dev/null dev/zero

[root@localhost linx]# ls -lrt dev/*
crw-rw-rw- 1 root root 1, 3 Jun 16 17:01 dev/null
crw-rw-rw- 1 root root 1, 5 Jun 16 17:01 dev/zero

[root@localhost linx]# chroot /home/ajay/ws/chroot/linx
chroot: failed to run command `/bin/bash': No such file or directory

An error occurred. Looking back at the ldd output, all the libraries resolve from the lib64 folder, so rename lib to lib64:

[root@localhost linx]# mv lib lib64

[root@localhost linx]# chroot /home/ajay/ws/chroot/linx
bash-4.1# pwd
/

bash-4.1#

Welcome to the chroot jail. Now build customized applications inside it.

Wednesday, June 15, 2016

virtualization : From college to corporate

"imperfect virtualization can and often is preferable to perfect virtualization"

Linux virtualization can be used for isolating specific apps, programming code or even an operating system itself, as well as for security and performance testing purposes.

The evolution of virtualization greatly revolves around a very important piece of software called the hypervisor.
Hypervisor: A software layer or subsystem that controls hardware and provides guest operating systems with access to the underlying hardware. The hypervisor allows multiple operating systems, called guests, to run on the same physical system by offering virtualized hardware to each guest operating system.

Bare-metal Hypervisor (Type 1). This type of hypervisor is deployed as a bare-metal installation: the first thing installed on the server, acting as the operating system, is the hypervisor itself. The benefit is that the hypervisor communicates directly with the underlying physical server hardware. Those resources are then paravirtualized and delivered to the running VMs. This is the preferred method for many production systems.

Hosted Hypervisor (Type 2). In this model the software is not installed onto the bare metal, but instead is loaded on top of an already running operating system. For example, a server running Windows Server 2008 R2 can have VMware Workstation 8 installed on top of that OS. Although there is an extra hop for the resources to take on their way to the VM, the latency is minimal, and with today's software enhancements the hypervisor can still perform well.

Important terms to understand virtualization.

Full virtualization: The guest operating system and any applications on the guest virtual machine are unaware of their virtualized environment and run normally. Hardware-assisted virtualization is the technique used for full virtualization with KVM (Kernel-based Virtual Machine) in Red Hat Enterprise Linux.
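A quick way to check whether a machine supports hardware-assisted virtualization (a minimal check; vmx is Intel VT-x, svm is AMD-V):

```shell
# Counts the logical CPUs whose flags include vmx or svm;
# 0 means KVM-style full virtualization has no hardware assist available
grep -Ec '(vmx|svm)' /proc/cpuinfo
```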

Para-virtualization: After the guest VM is installed on top of the hypervisor, there usually is a set of tools which are installed into the guest VM. These tools provide a set of operations and drivers for the guest VM to run more optimally. For example, although natively installed drivers for a NIC will work, paravirtualized NIC drivers will communicate with the underlying physical layer much more efficiently. Furthermore, advanced networking configurations become a reality when paravirtualized NIC drivers are deployed.

Software virtualization (or emulation): Software virtualization uses slower binary translation and other emulation techniques to run unmodified operating systems.

Migration: Migration describes the process of moving a guest virtual machine from one host to another. This is possible because the virtual machines are running in a virtualized environment instead of directly on the hardware. There are two ways to migrate a virtual machine: live and offline.
e.g. load balancing, upgrading or making changes to the host, energy saving, and geographic migration.

Key examples: KVM, VMware ESX, and Hyper-V.