Tuesday, January 16, 2018

How to download images from OpenStack

Use the following commands to download images from OpenStack Glance.

source  overcloudrc 
openstack image save --file file-name.qcow2 image-name-or-id  

If you would like to download and compress at the same time, use:

source overcloudrc 
openstack image save image-name-or-id | gzip -c > image-name.raw.gz  

For example:

[stack@ospdir ~]$ source overcloudrc 
(admin@overcloud) [stack@ospdir ~]$ openstack image list -c ID -c Name -c "Disk Format" -c Size -f table --long 
+--------------------------------------+----------+-------------+-------------+ 
| ID                                   | Name     | Disk Format |        Size | 
+--------------------------------------+----------+-------------+-------------+ 
| 6c5384f6-cf86-4a41-b9d4-8cafbb9c08fc | CentOS74 | qcow2       |   854851584 | 
| 4d43fb6d-16a3-4c85-8eab-f0088a9f5aaa | Win2k16  | raw         | 42954915840 | 
| 3bcdeb04-9170-4f2c-8468-8797477b6c02 | cirros   | qcow2       |    13267968 | 
+--------------------------------------+----------+-------------+-------------+ 
(admin@overcloud) [stack@ospdir ~]$ openstack image save Win2k16 | gzip -c > Win2k16.raw.gz 
(admin@overcloud) [stack@ospdir ~]$ du -b Win2k16.raw.gz 
9151990788    Win2k16.raw.gz 
(admin@overcloud) [stack@ospdir ~]$ openstack image save --file cirros.qcow2 cirros 
(admin@overcloud) [stack@ospdir ~]$ du -b cirros.qcow2 
13267968    cirros.qcow2 
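
For uncompressed downloads, a quick sanity check is to compare the checksum that Glance reports for the image with the checksum of the downloaded file (a sketch, assuming the Glance checksum is still the default MD5 on this release; cirros/cirros.qcow2 are the names from the example above):

openstack image show -c checksum -f value cirros
md5sum cirros.qcow2

The two values should match. For the gzip-compressed variant you would have to decompress the file first before comparing.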

Thursday, November 5, 2015

CYBOSOL Labs - Drupal on Kubernetes (I)

We @ CYBOSOL have always been strong proponents of open source, with the belief that knowledge should be shared. We have decided to put together this series of write-ups to share our experience and learning, just our way of saying a little thank you to the Open Source community for all its benefits & goodness that we have been enjoying through CYBOSOL's 9-year journey.
So, as a start, the R&D team at CYBOSOL is embarking on a project to explore the possibilities of scaling Drupal massively without worrying about the traditional bottlenecks of databases and file storage.

Why Drupal?

The answer is simple: Drupal is one of the simplest, yet most powerful and widely used content management systems in the world. But users often get stuck when trying to scale it to today's demands, which are frequently amplified by random vulnerability and rogue scans trying to bring down the servers.

Why did we choose these tools?

The primary goals of this project were identified as:
  • Maintain and support an unmodified Drupal core.
  • Retain all of Drupal's core functionality.
  • Scale on demand and tackle database & static-file bottlenecks.
  • Provide a cost-effective model, ready to deploy on popular PaaS platforms.
To achieve the above goals, we chose the following tools based on our experience and the availability of resources.
  • OpenStack - Nova, Swift & Neutron 

OpenStack was chosen simply because it was readily available in our lab, with all the necessary services - Nova, Swift and Neutron - already powering our other projects. All we have to do is create another project and fire up the VMs and the object store. We will probably use Ansible to automate the process, though.
More importantly, OpenStack Swift is a good choice for keeping files outside the stateless web-server pods, which will be moving around the cluster to keep up with varying demand. This also means that we don't have to worry about shared folders or long-running pods & containers with stateful content.
  • CoreOS

CoreOS Beta version 835.1.0 (Beta as of this writing - and remember, we are still in the R&D phase, so there is no issue in using it) is the operating system of choice, as it has all the necessary tools - etcd2, fleet, Docker & kubelet - built in and ready to start a Kubernetes cluster. So why look elsewhere when everything is served up on a single plate :)? We have already written a couple of cloud-config YAML files to speed up the build process.
  • Kubernetes

We were in a dilemma as to which orchestrator and scheduler to use. Recently we came across this nice post from Adrian Mouat, and we were convinced that Kubernetes would be a better choice for now than Swarm, fleet or even Mesos.
  • Docker (of course)

Yes, of course we will be using Docker containers, as Docker is the container runtime of choice in Kubernetes. Please do not get us wrong - we are not going to put everything inside one container and call it done. Instead, we will follow the Kubernetes way of building Pods and Service endpoints; a rough kubectl sketch of that split appears after this tool list.
  • YouTube Vitess 

We are planning to use YouTube Vitess, the MySQL-based clustering system that powers the database behind YouTube's massive metadata. We are not yet sure if it will fit our needs, but from the descriptions and notes, we think it will be a good bet.
  • Memcached or Redis 

A key-value caching engine to offload work from the database. We may or may not use one, as Vitess provides the majority of the offloading functionality on its own.
  • Apache 2.4 + PHP5

Since the project objective is to keep Drupal deployment simple, we will use Apache 2.4 with the PHP5 module. As you know, Drupal is built with Apache in mind, especially those nifty .htaccess rules that ship with Drupal.
  • Varnish Cache

Like icing on top of a cake, Varnish Cache would be the simple and elegant caching engine to complement Drupal and take the brunt of the web traffic.
  • Nginx - in case we have to offload HTTPS

If time permits, we will implement Nginx pods for offloading SSL traffic. The idea is to put Nginx alongside Varnish, so that Nginx terminates all incoming SSL traffic and forwards it to the Varnish cache.
One could argue that, to avoid complexity, we should just use Nginx for both caching and SSL offloading. Well, yes, we could. But we will leave that decision to the reader.
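
As a rough illustration of the Pods-and-Services split mentioned under the Docker bullet (a sketch only - the drupal-web name and the drupal image are placeholders, and the exact kubectl flags vary between Kubernetes releases of this era), the idea on the command line looks roughly like:

# keep a replicated set of stateless Drupal web pods running
kubectl run drupal-web --image=drupal --replicas=3 --port=80
# give those pods a stable Service endpoint
kubectl expose rc drupal-web --port=80

In practice these would more likely be written as YAML manifests, but the two commands show the point: the stateless web tier scales independently of the Service endpoint that clients talk to.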
So that's the brief on the project. We are planning to start it next week, and our plan is to release the full set of how-tos by 20th November. Please stay tuned! Comments and suggestions are welcome.
CYBOSOL R&D Team.

Friday, August 30, 2013

Puppet module for installing and configuring pgpool

Here is a Puppet module that I have written for installing and configuring pgpool. Feel free to modify it to suit your needs.
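
The module itself is embedded from an external source, so it is not reproduced here. Purely as a sketch of how a module like this might be exercised once it is on the module path (the /etc/puppet/modules path and the pgpool class name are assumptions, not taken from the module):

# place the module on the module path, then apply its top-level class locally
puppet apply --modulepath=/etc/puppet/modules -e 'include pgpool'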

Ansible module for AWS EC2 ENI

Here is an Ansible module that I have written (based on the ec2_vol module by Lester Wade) for creating an AWS EC2 ENI and attaching it to an instance. Attaching to an instance is optional.


Feel free to modify it to your own needs.
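
As with the pgpool post, the module source is embedded from an external source and not shown here. Purely as an illustration of how a custom module like this could be tried out ad hoc (the module name ec2_eni, the ./library path and every parameter below are hypothetical, not taken from the actual module):

# hypothetical ad-hoc call; module name, path and parameters are assumptions
ansible localhost -M ./library -m ec2_eni -a "subnet_id=subnet-xxxxxxxx instance_id=i-xxxxxxxx device_index=1 state=present"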

Monday, February 4, 2013

Script I used for suspending wireless - Bluetooth and Wi-Fi - on a Sony Vaio VPCS116FA

Here is a script that I used for disabling/enabling wireless on my laptop during suspend/wakeup:

[root]# cat > /etc/pm/sleep.d/wireless <<'EOF'
#!/bin/bash
# pm-utils hook: block all radios (Bluetooth and Wi-Fi) on suspend/hibernate,
# unblock them again on resume/thaw.

. /usr/lib/pm-utils/functions

case "$1" in
    hibernate|suspend)
        rfkill block all
        ;;
    thaw|resume)
        rfkill unblock all
        ;;
    *)
        ;;
esac

exit 0
EOF
[root]# chmod 755 /etc/pm/sleep.d/wireless
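
To check that the hook actually fires (assuming pm-utils is driving suspend, which is what the /etc/pm/sleep.d path implies), suspend from the command line and inspect the radio state after waking up:

[root]# pm-suspend
[root]# rfkill list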

Thursday, January 5, 2012

multipath.conf - QLogic qla2xxx, EMC CLARiiON and RHEL 5.7 64-bit

Below is the configuration that I used for a server with a QLogic HBA (qla2xxx driver) connected to EMC CLARiiON SAN storage.


defaults {
        udev_dir                /dev
        polling_interval        10
        selector                "round-robin 0"
        path_grouping_policy    failover
        getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
        prio_callout            "/bin/true"
        path_checker            tur
        rr_min_io               100
        rr_weight               uniform
        failback                immediate
        no_path_retry           12
        user_friendly_names     yes
}
blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z]"
        devnode "^cciss!c[0-9]d[0-9]*"
}
devices {
       device {
               vendor                  "DGC"
               product                 ".*"
               product_blacklist       "LUNZ"
               getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
               prio_callout            "/sbin/mpath_prio_alua /dev/%n"
               features                "1 queue_if_no_path"
               hardware_handler        "1 alua"
               path_grouping_policy    group_by_prio
               failback                immediate
               rr_weight               uniform
               no_path_retry           60
               rr_min_io               1000
               path_checker            emc_clariion
       }

}

I was getting "Buffer I/O error on device" messages when using the "emc" hardware handler. I hope this helps someone.
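
For completeness, one way to apply a change like this (a rough sketch; the exact steps depend on whether the maps are in use) is to flush unused maps, restart multipathd and verify the resulting path groups:

multipath -F
service multipathd restart
multipath -ll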

Friday, September 30, 2011

Tesla C2075 not detected by Nvidia control panel after installing Driver 275.89

My friend and I bought two Tesla cards for one of our HPC projects. To my surprise, after installing the new driver (Win7 64-bit) for the Tesla C2075 compute board, the Nvidia Control Panel would not start and gave the error "You are not currently using any Display attached to an Nvidia GPU". But Windows Device Manager listed it as a working card, and there were no errors in the system logs. It was really puzzling.

After some time spent fiddling with the settings and Googling, I found this Autodesk forum thread, which says that by default the new Win7 driver sets the TCC bit ON. So I started a command prompt in administrator mode, navigated to the C:\Program Files\NVIDIA Corporation\NVSMI folder, and issued the following command to disable the setting.

nvidia-smi.exe -dm 0

A reboot was required. And guess what, the control panel started working as it was supposed to. Whew! It was a great relief, because I thought the board might have gone bad.
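
If memory serves, the same tool can also report which driver model (TCC or WDDM) is current and pending, which is a handy way to confirm the change before and after the reboot. From the same NVSMI folder (the findstr filter just trims the output; plain nvidia-smi.exe -q works too):

nvidia-smi.exe -q | findstr /i "TCC WDDM"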