Drizzle CI infrastructure getting mature

It's been quite a while since I last discussed the new CI infrastructure for Drizzle. The repository is getting better every day. A lot of development has been done and it's not far from completion. So I thought I would give my readers a glimpse of what is going on while I finish off the final few pieces of development work in the interim.

So, in this post, we move on to the stage where the repository has Salt states for the following:

1. Installing Drizzle

2. Ensuring that the drizzled service is running

3. Setting up the SysBench tables for regression testing

4. Installing the SysBench package

And below is the output when I run the magical "salt '*' state.highstate":

minion-ubuntu:
----------
State: – pkg
Name: drizzle
Function: installed
Result: True
Comment: The following packages were installed/updated: drizzle.
Changes: drizzle: { new : 2011.03.13-0ubuntu5 old : }
----------
State: – service
Name: drizzle
Function: running
Result: True
Comment: The service drizzle is already running
Changes:
----------
State: – file
Name: /home/shar/sysbench_db.sql
Function: managed
Result: True
Comment: File /home/shar/sysbench_db.sql is in the correct state
Changes:
----------
State: – cmd
Name: drizzle -uroot < /home/shar/sysbench_db.sql
Function: run
Result: True
Comment: Command "drizzle -uroot < /home/shar/sysbench_db.sql" run
Changes: pid: 15947
retcode: 0
stderr:
stdout:
----------
State: – pkg
Name: sysbench
Function: installed
Result: True
Comment: The following packages were installed/updated: sysbench.
Changes: sysbench: { new : 0.4.12-1build2 old : }
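
For context, here is a rough sketch of the kind of .sls state file that would produce a run like the one above. It is only a sketch, not the exact state files in the repository, and the salt:// source path is an assumption:

# drizzle-regression.sls -- illustrative sketch only
drizzle:
  pkg:
    - installed
  service:
    - running
    - require:
      - pkg: drizzle

/home/shar/sysbench_db.sql:
  file:
    - managed
    - source: salt://sysbench/sysbench_db.sql   # assumed location on the master

load-sysbench-tables:
  cmd:
    - run
    - name: drizzle -uroot < /home/shar/sysbench_db.sql
    - require:
      - file: /home/shar/sysbench_db.sql

sysbench:
  pkg:
    - installed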

Soon I will give my readers a taste of the complete configuration of the test nodes, with a topping of automation 🙂

Getting introduced to SaltStack

One tool that is going to be used intensively in my work is SaltStack. To introduce it to a newbie like me: SaltStack is a remote execution and configuration management tool. I know a newbie would find even that description a bit weird. To throw some light on it, the tool is used to manage remote systems (called minions) from a local machine (the master, just another name). So one can execute commands, install packages, start a service, transfer files to and from the minions, and so on; that is just to mention a few, and SaltStack provides much more. In this post, we will look into the two main portions, Salt states and Salt pillars, and get a brief introduction to Salt modules and Salt Cloud.

First things first, we will start with Salt states. Salt states are the configuration management side of SaltStack. They are used to put a remote system into the state we want. Let's say I want a system to have MySQL installed and running; I can do that with state files. That is pretty much what they do. In reality one would have lots of such requirements, such as having git installed, a repository cloned, the dependencies for compiling the branch installed, the code compiled, and so on. These states can be written in any data serialization format, with YAML being the default, and the files have the extension .sls.
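
As a tiny sketch of what that looks like (the package and service names here are assumptions and vary by platform):

mysql-server:
  pkg:
    - installed
  service:
    - running
    - name: mysql
    - require:
      - pkg: mysql-server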

Internally, Salt treats the states mentioned in the .sls file as dictionaries, lists, integers and strings. So when we have a state defined for the minions, we just have to transfer these state files to the minions, which then configure themselves according to the *rules* specified in the state files. For this purpose, Salt runs a simple file server, built on ZeroMQ, on the master, through which the state files are served to the minions. That explains the whole purpose of Salt states and a few internals of how they work.

Now a question should arise: what if the state files deal with minion-specific data? A simple example would be installing packages on Red Hat and Ubuntu platforms. The two are a little different, right? At least in the name of the package. Salt answers this with pillars. Pillar is another interface provided by SaltStack, which is nothing but data maintained per minion that the minion can use when rendering the state files. So one can refer to pillar data in the state files, and each minion fills in the data specific to itself when parsing the state file. Every minion has some default data, and we can also serve additional data to a minion from the master by writing pillar files. These files have the same construct as state files, except for what we write inside.
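
Here is a sketch of how that fits together (the pillar key and package names are illustrative assumptions): a pillar file carries the platform-specific value, and the state file pulls it in through Jinja, so each minion resolves it to its own data.

# /srv/pillar/packages.sls -- illustrative only
mysql_pkg: mysql-server      # a Red Hat minion would be served a different value here

# in the state file, each minion substitutes its own pillar value
{{ pillar['mysql_pkg'] }}:
  pkg:
    - installed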

This post focuses mainly on the state and pillar interfaces, since those are the two components I use intensively. It is also just a brief overview for newbies to get started; I hope it makes it easier for them to go through the official docs. Now, to touch base with Salt modules and Salt Cloud: the modules are the remote execution part of SaltStack. Many modules already come with the package, and we can write our own custom modules as well. Salt Cloud is a cloud provisioning tool; one can create VM nodes of a specified configuration and destroy them once their job is done.
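
A quick taste of both (the cloud profile and node names below are made-up examples):

salt '*' test.ping                    # remote execution: check that every minion responds
salt '*' pkg.install git              # install a package on all minions
salt-cloud -p ubuntu_profile node1    # create a VM from a (hypothetical) profile
salt-cloud -d node1                   # destroy it once its job is done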

Now that I have introduced this simple yet powerful tool to my readers, I hope they will have a firm hold of it when I write about my actual work. Lots more interesting stuff coming soon. 🙂

Parallelization and Automation of Drizzle CI infrastructure

This post is all about my work for Google Summer of Code 2013. Drizzle Continuous Integration uses the Jenkins CI tool at present. Drizzle had dedicated servers which ran the Jenkins master with a few slaves. However, the infrastructure did not support parallel running of tests / builds to a scalable extent. So the project aims at bringing in parallelism along with automation. This can be envisioned as follows.

Drizzle CI does builds using Jenkins and tests for regressions using the available test suites. To bring in parallelism, the CI infrastructure executes test jobs, which may include a build, a sysbench test, a randgen test, etc., simultaneously. A dedicated system / cloud node is configured for the execution of each job in the job queue. Multiple tests + Multiple nodes = Parallelism.

Configuring these nodes is handled in a simple way: write a config file in a specified format, then issue a command which configures the nodes accordingly. Cloud nodes + Configuration files = Automated configuration.

To put it in a nutshell, the first piece of work is to automate the setting up of cloud nodes, and the latter half is to parallelize the test runner. In the following posts, I will give updates on the progress and write about the tools involved in the work, how to use them, and so on. 🙂

Drizzle says hello to SaltStack

This post is about my next piece of work: importing the Drizzle database into the Salt modules. This was suggested by my mentor Patrick Crews, and I thank him for giving me this golden opportunity to extend Drizzle to SaltStack. This is the first in a series of posts regarding this project.

I have done some initial work and have successfully connected to the Drizzle server. I'll first describe the changes to be made in the master / minion config file. Before using Drizzle, we have to enter the configuration parameters, namely host, user, password, port and schema, in the appropriate config file (/etc/salt/master for the master config or /etc/salt/minion for the minion config). If we want to use Drizzle on a specific minion, we can add these parameters in that minion's config file. If instead we want to access Drizzle on all the minions, we add these parameters in the master's config file. You can just append the respective file with:

drizzle.host: '127.0.0.1'
drizzle.user: 'root'
drizzle.passwd: ''
drizzle.db: 'drizzle'
drizzle.port: 4427

Once this is done, restart the master with /etc/init.d/salt-master restart or the minion with /etc/init.d/salt-minion restart. Now you can access the Drizzle database through remote execution 🙂

As an introductory piece of work, I have written the method for retrieving the version of the Drizzle server that is running on the minion. This can be used with the command: salt '*' drizzle.version.

On my system, I got the following output:

sharan@sharan:~$ sudo salt \* drizzle.version
sharan:
VERSION(): 7.1.33-stable
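
To give a rough idea of what such an execution module function involves, here is a minimal sketch. This is not the actual module code; it assumes a MySQL-protocol client library such as MySQLdb can talk to the Drizzle server, and it reads the config options shown above.

# drizzle.py -- a rough, illustrative sketch of a Salt execution module;
# not the actual module code, and error handling is left out.
import MySQLdb  # assumption: a MySQL-protocol client can talk to the Drizzle server

__virtualname__ = 'drizzle'

def _connect():
    # Pull the connection settings that were added to the config file above
    return MySQLdb.connect(
        host=__salt__['config.option']('drizzle.host'),
        user=__salt__['config.option']('drizzle.user'),
        passwd=__salt__['config.option']('drizzle.passwd'),
        db=__salt__['config.option']('drizzle.db'),
        port=__salt__['config.option']('drizzle.port'))

def version():
    '''
    Return the version of the Drizzle server running on the minion.

    CLI Example:  salt '*' drizzle.version
    '''
    conn = _connect()
    cur = conn.cursor()
    cur.execute('SELECT VERSION()')
    ret = cur.fetchone()[0]
    conn.close()
    return ret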

So the Drizzle module will be available in full swing in the forthcoming weeks. Stay tuned. 😀

Automated Performance Testing of Minions

This post covers the new module that has been added to the list of Salt modules: SysBench. To throw some light on SysBench, it is a benchmark tool, primarily conceived to test MySQL-based databases and later extended to other system parameters such as CPU performance, thread and mutex implementations, and file and memory access. SysBench is best known for its simplicity and user-friendliness.

The need for this module comes from the fact that there are situations in which we need to analyze the performance of the minions right from the master node. A crisp example is load balancing: knowing the performance of each minion (which is definitely not uniform across minions) before dividing up a task leads to a better assignment of minions, without pressuring them.

This could be done by just executing the SysBench command line options using the famous 'cmd.run', so why this module? This module automates the entire process. SysBench CLI invocations take a lot of parameters and are thus cumbersome (probably their only drawback). The module hides that drawback from the users. Now users can just execute an ordinary salt '*' sysbench.<test> from their master node. Moreover, the module does not test just one case; it runs a number of cases, varying from one test to another.

For example, the CPU test checks 4 different prime limit values, namely 500, 1000, 2500 and 5000, automation playing its part here too. Some test cases are derived using orthogonal array testing to cover the whole set of cases. Moreover, the test reports are parsed in a very neat manner, letting the user get a quick overview of the performance.

The various tests and their corresponding CLI options provided by this module are:

1. CPU ( cpu )

2. Threads ( threads )

3. Mutex ( mutex )

4. Memory ( memory )

5. File I/O ( fileio )

So to use this module, just type the following command: salt '*' sysbench.<CLI option>.
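
For instance (the target patterns here are just illustrations):

salt '*' sysbench.cpu        # CPU (prime number) benchmark on every minion
salt 'db*' sysbench.fileio   # file I/O benchmark on minions whose id matches db*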

Cheers! 🙂

Improved Performance Regression Monitoring

As a part of Google Summer of Code 2012, I worked on Improved Performance Regression Monitoring for DrizzleDB. At its core, it's a Quality Assurance and Automated Testing project. Drizzle already has *Drizzle Automation*, which deals with automating QA processes, code coverage analysis and benchmarking. However, due to the following issues, it's not preferred much:

1. It's not in-tree

2. It's more complex

3. It's less user-friendly

To remedy this, the kewpie tests came into effect. Kewpie is a test runner for MySQL-based databases. It's cleaner, simpler and very effective. Moreover, it's included in-tree. The project's intent is to import the sysbench tests from Drizzle Automation into kewpie. SysBench is a benchmark tool; it has benchmarks for various system parameters, including an OLTP benchmark (for testing database servers). Following are the enhancements made to the Drizzle trunk:

1. Added options for kewpie:

Options for a dsn-string and for mailing test reports have been included. The dsn-string specified via the --results-db-dsn option is used for connecting to a database. And if the user needs to send the report of a test to a mail id, it can be done easily by specifying the mail id with the --email-report-tgt option. You can have a deeper look over here.

2. Imported sysbench tests:

There are two variants of the sysbench tests. One is a *readonly* test, which shoots the database with SELECT queries, and the other is a *readwrite* test, which includes INSERT, UPDATE, DELETE and DROP queries too. These tests are imported into kewpie in an ingenious way. Instead of having the entire test code in each test case, there is a base class *sysbenchTestCase.py* in /tests/lib/util. Both test cases, readonly and readwrite (/tests/qp_tests/sysbench), inherit this base class. The test cases themselves contain only the configuration options, which are unique to each. Peep here for more details; a rough sketch of this layout is given at the end of this post.

3. Improved documentation:

Documentation for kewpie and sysbench was modified, and documentation for the drizzletest commands was included. The latter is not yet complete, however the API calls can be glanced at in the documentation. More information on each will be documented very soon.
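
To illustrate the layout described in point 2 above, here is a rough, hypothetical sketch of the pattern. The class and option names are mine, not kewpie's actual API: the base class knows how to build and run a sysbench command, while each test case only supplies its own configuration.

# Hypothetical sketch of the pattern only -- class and option names are mine,
# not kewpie's actual code.
import subprocess

class SysbenchTestCaseSketch(object):
    """Base class: knows how to build and run a sysbench oltp command."""
    options = {}  # each test case overrides this with its unique configuration

    def command(self):
        args = ['sysbench', '--test=oltp']
        args += ['--%s=%s' % (key, val) for key, val in sorted(self.options.items())]
        args.append('run')
        return args

    def run(self):
        # Return sysbench's report text so it can be parsed later
        return subprocess.check_output(self.command(), universal_newlines=True)

class ReadOnlyTest(SysbenchTestCaseSketch):
    options = {'oltp-read-only': 'on', 'num-threads': '4'}

class ReadWriteTest(SysbenchTestCaseSketch):
    options = {'oltp-read-only': 'off', 'num-threads': '4'}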