In the world of operations there are now many areas of focus. One of the fastest-growing is DevOps, which centers on automation and operational efficiency. Here at Distil Networks we have an ever-growing and useful DevOps toolset that we use to automate and run our network efficiently. Here is a list of five DevOps tools that we are currently investigating:
No true DevOps shop today is complete without one of the big four state/configuration management tools (Chef, Puppet, Salt, Ansible). We happen to prefer Chef. Chef is an excellent way to deploy and maintain machine state across hundreds of machines. Need a configuration change for your web server? Need to upgrade a binary? With Chef, the change is made locally, uploaded to the Chef server, and implemented across your entire network within minutes. Chef is a great way to have machine state described in code and versioned, so when problems arise within an application it is easy to trace back through each previous state and, if necessary, roll back to one.
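As a sketch of what "machine state described in code" looks like, here is a minimal hypothetical Chef recipe; the package, template, and service names are illustrative, not our actual cookbooks:

```ruby
# Hypothetical recipe: install nginx and manage its config from a template.
package 'nginx'

template '/etc/nginx/nginx.conf' do
  source 'nginx.conf.erb'
  owner 'root'
  group 'root'
  mode '0644'
  # Reload nginx whenever the rendered config changes.
  notifies :reload, 'service[nginx]'
end

service 'nginx' do
  action [:enable, :start]
end
```

Because this recipe lives in version control, a bad change can be reverted in git and re-converged just like any other code rollback.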
Whether you are new to Chef or an experienced user, if you have not looked at Foodcritic, give it a read. Foodcritic is a linting tool built around a set of rules that are largely considered best practices for Chef recipe writing, and it helps prevent a lot of problems during convergence runs.
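Running Foodcritic is a one-liner; the cookbook path below is illustrative:

```shell
gem install foodcritic
foodcritic cookbooks/my_cookbook

# In CI, treat any rule violation as a build failure:
foodcritic -f any cookbooks/my_cookbook
```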
A DevOps tool we use hand in hand with Chef is Aptly. Aptly is a great service that can be installed and configured to host a Debian repository within your environment. Aptly has a gentle learning curve: someone with no experience can read the Aptly documentation and be up to speed on how it works within a few minutes. At Distil Networks, Aptly allows us to easily manage and version our Debian packages (which are often deployed through Chef). With Aptly, it's easy to set up separate repositories for staging and production environments. This way, packages go through the correct QA process with a low risk of something being accidentally pushed to production.
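A staging-to-production flow with Aptly might look like the following sketch; the repo names, distribution, and package name are hypothetical:

```shell
# Create a staging repo, add a freshly built package, and publish it.
aptly repo create -distribution=trusty -component=main myapp-staging
aptly repo add myapp-staging myapp_1.0.0_amd64.deb
aptly publish repo myapp-staging staging

# After QA signs off, promote the package into the production repo.
aptly repo create -distribution=trusty -component=main myapp-production
aptly repo copy myapp-staging myapp-production myapp_1.0.0
aptly publish repo myapp-production production
```

(Publishing normally expects a GPG key for signing the repo metadata, which is omitted here for brevity.)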
Above, we discussed using Aptly to host a custom repository of your own packages. However, one of the other really cool features of Aptly is its ability to pull down existing repositories and host mirrors of them internally. This works great if you need an internal Debian or Ubuntu repo to satisfy one of the many compliance regimes out there (PCI, etc.).
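Mirroring follows the same command style; the mirror below pulls the `main` component of an upstream Ubuntu release (names and URL are illustrative):

```shell
aptly mirror create trusty-main http://archive.ubuntu.com/ubuntu/ trusty main
aptly mirror update trusty-main

# Snapshot the mirror so the published repo is a fixed point in time.
aptly snapshot create trusty-main-20150601 from mirror trusty-main
aptly publish snapshot trusty-main-20150601
```

The snapshot step is the key design choice: internal machines install from an immutable, dated view of upstream rather than whatever upstream happens to contain today.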
How do we build those great Debian packages for Aptly, you may ask? With Jenkins, of course. Jenkins is an excellent build and CI server. It allows us to set up the automated processes needed to build, test, and package our code. One of the great things about Jenkins is the community around it: there are plugins for almost every type of repository and build system you may be using.
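As one hypothetical example of a Jenkins shell build step that produces a Debian package, here is a sketch using `fpm` (a common packaging tool, not necessarily what any given shop uses); the project name and layout are invented:

```shell
# Run the test suite first; a non-zero exit fails the Jenkins build.
make test

# Package the build output as a .deb, versioned by Jenkins' BUILD_NUMBER.
fpm -s dir -t deb -n myapp -v "1.0.${BUILD_NUMBER}" --prefix /opt/myapp ./build
```

The resulting `.deb` can then be pushed into the Aptly staging repo described above.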
Not really a tip or trick, but a pitfall for new Jenkins users to remember: by default, Jenkins has no security configured, so without a firewall or other host-side protection, your code could be visible to the entire world. Check out this list of ways to lock down your Jenkins installation.
The above three tools are largely used for automation and really embody a lot of the ideas that make up DevOps. The next two help with finding areas that could use automation or increased efficiency. The first of these is Logstash, an amazing open source project that makes centralized log aggregation easy. Instead of logging into 100 servers, Logstash provides all your log files in one central place. Visualizations and graphs (via Kibana) allow you to quickly see what types of problems and issues could be occurring. Logstash is mostly used in conjunction with Elasticsearch (for indexing) and Kibana, together forming the increasingly popular ELK stack.
Logstash, when broken down, is really just inputs, filters, and outputs. Without going into great detail, one of the more interesting and useful outputs is the Amazon S3 output plugin. It is really nice because, if you are low on local space for keeping everything in Elasticsearch for use in Kibana, you can also stream log entries up into S3 buckets and then easily pull them back down into Elasticsearch later if you need to reinvestigate a certain historical event. This is a great way to retain logs for a long duration (multiple years) for the compliance regimes that require it (PCI, etc.) without having to keep a large amount of storage on site.
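Putting the input/filter/output model together, a Logstash pipeline with both an Elasticsearch and an S3 output might look like this sketch; the file path, host, bucket, and region are all illustrative:

```conf
input {
  file { path => "/var/log/syslog" type => "syslog" }
}

filter {
  grok { match => { "message" => "%{SYSLOGLINE}" } }
}

output {
  # Index into Elasticsearch for searching and Kibana dashboards.
  elasticsearch { hosts => ["localhost:9200"] }

  # Also archive raw entries to S3 for long-term retention.
  s3 {
    bucket => "my-log-archive-bucket"
    region => "us-east-1"
    prefix => "logs/syslog"
  }
}
```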
The other tool we use to discover problem areas is Zabbix, another popular open source project. Zabbix, upon install, comes with pre-built templates for both Windows and Linux, so you can quickly begin receiving a large number of metrics describing the health of a server. These range from CPU usage to whether a specific service is running. The really awesome part of Zabbix is that if a monitor you want doesn't exist, you can create it yourself: if you can pull the information from a machine, you can monitor it with Zabbix. Zabbix also allows you to set thresholds on the data you are pulling, so you can find issues before they affect your service and be proactive instead of reactive in your monitoring efforts.
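Custom checks are defined as `UserParameter` lines in the agent config; the key name and process below are hypothetical:

```conf
# In zabbix_agentd.conf: report 1 if the "myapp" process is running, else 0.
UserParameter=myapp.running,pgrep -x myapp >/dev/null && echo 1 || echo 0
```

A trigger on the server side can then alert whenever `myapp.running` drops to 0.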
One of my favorite features of Zabbix is low-level discovery. It is an easy way to create a common template of checks that automatically discovers differences between machines (file system partitions, NICs, etc.) and generates custom checks for each machine based on what the discovery ruleset found. If interested, read more about Zabbix here.
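Under the hood, a discovery item just returns JSON in a `{"data": [...]}` envelope of `{#MACRO}` keys, which the server expands into per-entity checks. Here is a minimal sketch of a custom filesystem discovery script in Python; the mount list is hardcoded for illustration, where a real check would read `/proc/mounts`:

```python
import json

def discover_filesystems(mounts):
    """Build a Zabbix low-level discovery document from (mountpoint, fstype)
    tuples. The {"data": [...]} envelope and {#MACRO} key style are the
    format the Zabbix server expects from an LLD item."""
    return json.dumps({
        "data": [
            {"{#FSNAME}": name, "{#FSTYPE}": fstype}
            for name, fstype in mounts
        ]
    })

# Illustrative output for two discovered filesystems.
print(discover_filesystems([("/", "ext4"), ("/var", "xfs")]))
```

Item prototypes in the template (e.g. free space on `{#FSNAME}`) are then stamped out once per entry in the returned array.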
This is a small portion of what is available in the growing world of DevOps tools. If you have any questions, or want to share some tools with us, please reach out; we always love to talk tech.
About the Author: Benji Taylor