This is the 200th article in the Spotlight on IT series. If you'd be interested in writing an article on the subject of backup, security, storage, virtualization, mobile, networking, wireless, DNS, or MSPs for the series PM Eric to get started.
I’ve always been intrigued by how companies run their backup processes. Every company I’ve worked at, no matter the business, has used some type of tape backup system. That could be a fluke, but it was my experience. (It may have something to do with everyone using the iSeries.) And it’s such a simple process with very simple steps: walk down to the secure location, gather the tape(s), load them, and repeat each day until eternity. Call it lazy or call it procrastination, but we finally reached a point where we needed to either upgrade the tape system or be done with the process entirely.
A little over a year ago I evaluated a number of options, but I felt trapped with some type of tape solution, at least on our iSeries-based ERP system. For those unfamiliar with the iSeries, it’s an IBM midrange server designed for small businesses. It’s also commonly known as the AS/400, and the main operating system it runs is OS/400. These systems can make it difficult to back up the operating system with anything but tape. So, I set off to create my own internal disk-to-disk solution.
That decision was the easy part. The next step was to build a concept and design. I decided to just build a cheap server with a ton of storage. It didn’t have to be fast; it just needed enough redundant storage. Once I had the server configuration in mind, I needed to figure out how I was going to back up the server.
So I needed to divide this into two phases. The first hurdle was the operating system: during a full backup, the OS is saved along with the current PTF (program temporary fix) level. After some time spent pulling my hair out and some discovery, my research led me to the concept of virtual tape, and I began tinkering with creating save files from it. After some trial and error, this appeared to be a viable solution.
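The article doesn’t show the exact commands, but on a modern release the virtual tape approach generally looks like the sketch below: create a virtual tape device and an image catalog, load it, and save the system to it. All names here (TAPVRT01, BKPCLG, the IFS path) are illustrative placeholders, not the author’s configuration.

```
/* Hedged sketch: virtual tape setup for an OS-level save.        */
/* Device, catalog, and path names are placeholders.              */
CRTDEVTAP  DEVD(TAPVRT01) RSRCNAME(*VRT)
VRYCFG     CFGOBJ(TAPVRT01) CFGTYPE(*DEV) STATUS(*ON)
CRTIMGCLG  IMGCLG(BKPCLG) DIR('/bkpimages') TYPE(*TAP) CRTDIR(*YES)
ADDIMGCLGE IMGCLG(BKPCLG) FROMFILE(*NEW) TOFILE(VOL001) IMGSIZ(10000)
LODIMGCLG  IMGCLG(BKPCLG) DEV(TAPVRT01)
/* SAVSYS requires a restricted state (ENDSBS SBS(*ALL)), so it   */
/* is typically run from the console during a maintenance window. */
SAVSYS     DEV(TAPVRT01)
```

The resulting image files in the IFS directory can then be copied off the box like any other stream file.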
Once I was over this hurdle, I had to tackle the second one — I needed to resolve how I was going to back up the data. This needed to be simple, secure, and safe. Fortunately, I had a good idea how I was going to accomplish this.
My design was to create iSeries save files. These are compressed files that let you save a single object or a group of objects into one file that can be exported easily. The method was to build a CLP (control language program) on the iSeries that creates a save file, copies all of the objects into it, then calls an FTP script to move the save file to the new server.
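A minimal sketch of such a CLP is below. Library, file, and host names (PRODLIB, BKPSAVF, QGPL, 'backupserver', the FTPCMDS member) are assumptions for illustration; the author’s actual program and script are not shown in the article.

```
/* Hedged sketch of the nightly backup CLP; all names are         */
/* placeholders.                                                  */
PGM
  /* Create the save file; if it already exists, just clear it    */
  CRTSAVF    FILE(QGPL/BKPSAVF)
  MONMSG     MSGID(CPF0000) EXEC(CLRSAVF FILE(QGPL/BKPSAVF))
  /* Save all objects in the production library into it           */
  SAVOBJ     OBJ(*ALL) LIB(PRODLIB) DEV(*SAVF) +
               SAVF(QGPL/BKPSAVF) DTACPR(*YES)
  /* Drive an unattended FTP session: the INPUT file is           */
  /* overridden to a source member holding the FTP subcommands    */
  /* (USER, BINARY, PUT, QUIT), and OUTPUT captures a log         */
  OVRDBF     FILE(INPUT) TOFILE(QGPL/QCLSRC) MBR(FTPCMDS)
  OVRDBF     FILE(OUTPUT) TOFILE(QGPL/QCLSRC) MBR(FTPLOG)
  FTP        RMTSYS('backupserver')
  DLTOVR     FILE(*ALL)
ENDPGM
```

The OVRDBF-then-FTP pattern is the standard way to script FTP in batch on the iSeries, which is what lets the whole thing run unattended from the job scheduler.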
The beauty of this is that once the data is off the iSeries, you would have to know both the library names I saved it from and the library names I saved it to. And to add another layer of security, you would need an iSeries on the same release and PTF level to read the data. Once I had this developed, I created a scheduled job that runs each night in the wee hours of the morning.
After all of this was complete, I had to decide what to do about the rest of the network. We don’t have a ton of data, but we were backing it up with an aging LTO-3 drive that was having issues. This led to two questions: how was I going to integrate the two concepts, and, with all of my data stored locally, what would I do if my office was hit by a meteor?
I decided I needed to look into offsite storage and how I could securely transport my data offsite. I wasn’t too comfortable sending my data to a random place and washing my hands of it, so I did my research. Luckily, I was able to find a data center that was close enough to my office but not close enough to be incinerated by the same meteor.
I contacted ITS (Iowa Data Centers) and arranged a facility tour. To my surprise, the place was immaculate, a perfect gem hidden from the world. It had everything: redundant power sources and failover generators. The facility is built into a hill at an elevation safe from extreme flooding, and the data center’s walls and ceiling are constructed of 12-inch-thick reinforced concrete. This place is solid! They also sold their own backup solution as a package, and it happened to be just what I was looking for.
We have their solution set up to keep an incremental offsite copy with a seven-day retention on our data, plus an image every 30 days. So, if a user messes up a document and saves it, they would have to forget about it for a month before it would be lost.
The transmission is done over an SSL tunnel, and the data is stored with 128-bit AES encryption and remains encrypted on their servers. The solution could be installed directly on our servers, so I didn’t have to conjure up any more scripts for the network. It also integrated with Exchange, MySQL, and MS SQL, which was key for the rest of the network. To make the situation even better, we were able to leverage a few of their other offerings, such as webhosting and training centers for ERP upgrades. Their restore process is simple and fast, but I still needed to design my own internal recovery process.
The network side was simple, just files and databases, but I needed to test the restore piece on the iSeries. During the initial design, I had built an FTP script to send the data back to my save library on the iSeries. What I needed to do was test the entire process: perform the full save, push the objects to the server, and send them offsite; then restore the files from the offsite location, push them back to the iSeries, and run the RSTOBJ commands. It was simple and worked as expected, so I documented the process and put it into production.
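The restore leg can be sketched the same way, mirroring the backup example above. Again, every name here (PRODLIB, BKPSAVF, QGPL, FTPGET, 'backupserver') is a placeholder, not the author’s actual setup.

```
/* Hedged restore sketch; names mirror the backup sketch and are  */
/* placeholders.                                                  */
PGM
  /* Pull the save file back from the backup server via a         */
  /* scripted FTP session (the FTPGET member holds a GET command) */
  OVRDBF     FILE(INPUT) TOFILE(QGPL/QCLSRC) MBR(FTPGET)
  FTP        RMTSYS('backupserver')
  DLTOVR     FILE(INPUT)
  /* Restore all objects from the save file into the target       */
  /* library                                                      */
  RSTOBJ     OBJ(*ALL) SAVLIB(PRODLIB) DEV(*SAVF) +
               OBJTYPE(*ALL) SAVF(QGPL/BKPSAVF) RSTLIB(PRODLIB)
ENDPGM
```

Note that, as the article says, the restoring system generally needs to be on the same (or a compatible) release and PTF level as the one that produced the save file.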
Now that we have had this system in place for a year, I can tell you I haven’t looked back at retrieving tapes each day. There are some major positives to this approach. First, from a financial standpoint, we are better off: we no longer need to purchase tapes, drives, or software, which is a significant savings. Second, I have peace of mind knowing our data is highly available; we can restore easily and can test restores across multiple versions of our data.
If you're interested in my approach, I created a how-to in the Spiceworks Community for a few of the pieces. Some of it will need to be tailored to your specific needs, but it will give you an idea and outline the iSeries piece. I’m also planning to write something up on the SAVOBJ and RSTOBJ commands to provide a brief overview for those interested. If you’d like more detailed information, feel free to PM me.
---
What do you think? Share your thoughts, tips and backup-related woes in the comments below!