Author Archive

Using Sublime like Vim

As my friends know, I like playing with new environments and new tools to make my work easier and faster. I learned Dvorak to type faster, switched to a Macintosh to have fewer problems with my environment, and tried many editors to speed up programming. Currently I use TextMate and Sublime Text at home and Eclipse at the company, but I miss TextMate’s split windows and command running, and I don’t want Eclipse eating 500 MB of memory just to edit some files in a project. I really like Sublime, however sometimes I need quicker shortcuts for commands, like deleting several lines at once, searching for the word under the cursor, and so on…
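Vim covers exactly these operations with one or two keystrokes; a quick reference for the ones mentioned above (not an exhaustive list):

```vim
" Vim equivalents of the shortcuts I miss in Sublime:
"   3dd    delete three lines at once (any count works: 10dd, 42dd…)
"   *      search forward for the word under the cursor
"   #      the same search, backwards
```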

This weekend I decided to learn Vim again, as I have so many times before. Vim is a really powerful editor for programmers and for anyone who prefers not to use the mouse while writing code. My goal was to forget the mouse while programming and make my work faster, as fast as the thoughts pop up in my head.

Continue reading »

Spec directory in Rails

Are you curious about what the spec directory contains in your Rails project? I’d like to write briefly about these directories.

RSpec generates the /spec directory with many subdirectories when you call rails generate rspec:install in your Rails application. You can extend it with your own special ones, but the defaults are the following:

Continue reading »

SimpleCov – Measuring coverage in Rails

Today I spent some hours discovering how to measure a Rails application’s test coverage, and found the Rcov and SimpleCov gems. Rcov is not supported on Rails 3.2, while SimpleCov is not supported on Ruby Enterprise Edition, so after burning some neurons I upgraded Ruby to 1.9.3 and went with SimpleCov instead. It is simple, but I spent many hours configuring it, so I’d like to share some of the issues with you:
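The basic setup itself is short; here is a minimal sketch of the configuration, assuming spec/spec_helper.rb is your test entry point:

```ruby
# spec/spec_helper.rb -- SimpleCov must be required and started *before*
# any application code is loaded, otherwise those files are not tracked.
require 'simplecov'

SimpleCov.start 'rails' do
  add_filter '/spec/'   # don't count the specs themselves as covered code
end
```

Running rspec afterwards writes an HTML report under coverage/ that you can open in a browser.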

Continue reading »

Clean code – How to clean up obfuscated code?

One of my good friends, Athos, wrote a very good article about cleaning up obfuscated code. Many developers don’t put much emphasis on the quality of their code, because they believe in over-optimized solutions and think code style doesn’t add any value to the software. But the problem is not only that the code becomes unreadable; you also won’t be able to change the logic later. Don’t spend much time on code optimization in the beginning, and check this out to see why:

Test Driven Development – Meetup Presentation

At the Veszpremi Technology Meetup I presented my experiences with test-driven development. Test-driven development is nowadays an increasingly popular technique for uncovering problems in long-running projects. Now I want to write more about this technique and describe why, and for whom, it is so good. (In the picture on the right you can see Mixer at the Veszpremi Technology Meetup; I have no photo of myself there.)

Continue reading »

Meetup in Veszprem

The Meetup project has stepped into its final phase. We have a venue for the presentations, I talked the owner of the Gesztenyes pub into accepting coupons for beers, and I created the Twitter and Facebook accounts. Now I want to show you the poster for the event:

I hope you like it!

Header files

On Friday we had a short debate with the guys in the room about the role of header and cpp files. I didn’t support splitting unit tests into a header and an implementation part, because I think in this case it isn’t necessary. Why?

Data types (i.e. classes) have abstract and concrete aspects. We store the abstract aspects in header files (in C++ this is the .h file) and the concrete aspects in implementation files (the .cpp file). In the header we define the data type’s signature and its exported interface (the public part of a class): parameters, methods, properties. In C this separation shows more clearly, because header files can be kept well apart from the implementation, while in C++ you must sometimes put implementation-specific members in the header (usually in the private section).

The more implementation-related information we put into the header file, the harder it becomes to change the implementation later without changing the header too. That can be a problem in complex, huge software, where changing something in a deeper layer can mean hours of extra compilation time.
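To make this concrete, here is a minimal sketch with a hypothetical Counter class. The private member never concerns callers, yet it must appear in the header, so changing the representation forces every file that includes counter.h to recompile:

```cpp
// counter.h -- the abstract side: the type's exported interface.
// Note that the private member must still appear here; this is exactly
// the implementation leakage described above.
#ifndef COUNTER_H
#define COUNTER_H

class Counter {
public:
    Counter();
    void increment();
    int value() const;
private:
    int count_;   // implementation detail, yet it lives in the header
};

#endif // COUNTER_H

// counter.cpp -- the concrete side: these bodies can change freely
// without touching counter.h (in a real project this would be a
// separate file that does #include "counter.h").

Counter::Counter() : count_(0) {}
void Counter::increment() { ++count_; }
int Counter::value() const { return count_; }
```

Swap count_ for a long or add a second field, and every translation unit including the header is rebuilt, even though the public interface is unchanged.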

Continue reading »

The new server – part 2

As I’ve mentioned, my server also provides Ruby on Rails hosting for companies, so I’d like to write briefly about Rails hosting. Rails is a framework written in Ruby. Ruby applications are distributed through a repository called RubyGems, which works much like apt-get: when you want to upgrade Rails, you do it with the gem management tool, just as with apt-get. The current Rails version is past the 2.0 release and RubyGems is past 1.3. When we were working on the sites that are currently hosted, we used Rails 1.2.3 and RubyGems 0.7. Today these versions are unsupported, and the upgrade is not so trivial.

Continue reading »

Backup mails

I ran into a problem when I built my new mail server environment. I used the backup files to recreate the accounts’ maildirs, and my mailbox appeared empty in my IMAP client, even though there were a lot of mails in it.

maildrop -V 3 -d

threw the following message:

Unable to open mailbox

even though the permissions were fine.

People tend to exclude tmp directories from their backups, but tmp is essential in a maildir: without a tmp directory the mailbox looks empty. So modify your backup script to store the tmp directories themselves, without their contents, to avoid headaches later.
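As a sketch with GNU tar (the paths and user name are just examples), the trick is to exclude the contents of tmp but not the directory itself:

```shell
# Example maildir layout for a hypothetical account:
mkdir -p vmail/user/new vmail/user/cur vmail/user/tmp
touch vmail/user/cur/msg1 vmail/user/tmp/partial-delivery

# Back up the tree, dropping only the *contents* of tmp directories.
# The pattern '*/tmp/*' does not match the tmp directory itself, so it
# is archived as an empty entry and maildrop can open the restored box.
tar czf mail-backup.tar.gz --exclude='*/tmp/*' vmail

# After a restore, tmp exists but is empty:
mkdir -p restore && tar xzf mail-backup.tar.gz -C restore
ls restore/vmail/user/tmp   # prints nothing: the directory is there, but empty
```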

The new server

I’ve been managing a server in a data center for three years. Back then we had a small pilot project written in Ruby on Rails. Rails deployment wasn’t so easy at the time, though these days there are good solutions with the Passenger module for Apache and nginx. So we had no choice: we invested in a machine and moved it into a data center. After the work ended I stayed on alone, and managing the server fell to me. Currently it hosts five sites, with email and domain services.

It’s running Gentoo; the reason is that I like specifying exactly which features of a piece of software I want to use (I don’t need a graphical extension for git, pulling in the x11-common package, on a web server), while other package managers usually try to guess what I need (and usually fail). The drawbacks are that compiling and installing all the necessary packages is very, very slow, and you need gcc or g++ on the machine, which raises security problems.
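For illustration, this kind of trimming is done with USE flags in Portage; a sketch of a server-oriented /etc/make.conf (the flag values here are only examples):

```shell
# /etc/make.conf -- global USE flags (illustrative values).
# A headless web server has no use for graphical front-ends, so X support
# is switched off for every package that offers it:
USE="-X -gtk -qt4"
```

Per-package exceptions go in /etc/portage/package.use rather than make.conf, so one package can keep a flag the rest of the system disables.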

As I mentioned, the machine has been working for three years, so I decided to replace it (its configuration would still be enough, but swapping out a whole system without any long downtime is not so trivial). My new configuration (thanks to Imi @ Balabit!):

  • Intel Core2Quad Q9300, 2500 MHz
  • Intel DG45ID motherboard (it’s a desktop board)
  • 4 x 2 GB Kingmax memory modules
  • 2 x 500 GB Seagate RAID Edition disks, ST3500320NS
  • Chieftec BH01BBB400 case

After a lot of testing I decided it would run Debian, but strictly speaking that is only the base of the system. The system consists of lightly separated “racks”, where each rack has one specific function but shares some common resources with the others. I’m going to write more about the construction of the racks; by way of introduction, they are simple chroot environments managed by Gentoo’s package manager, Portage.

You may ask why Debian, and why not Ubuntu or Gentoo. I’d like the base to be a thin layer that handles only the hardware and the basic system tasks, and I’d like to be able to set up the system in an hour, not in a day. I consider Ubuntu a desktop rather than a server operating system: it has the newest packages of everything, but on a server I prefer security to rapid package updates.

So I chose software RAID; I didn’t want to pay a lot for a hardware RAID controller. I can run into problems on a power outage, but in a data center that is very rare.

So my partitions:

  • 150 MB /boot, ext2 (RAID1)
  • 8 GB swap | /tmp (sda|sdb)
  • 5 GB /, XFS (RAID1)
  • 453 GB logical partition with RAID1 and LVM

The different volumes on LVM:

  • /usr XFS
  • /var XFS
  • /backup XFS (for local backups)
  • /racks XFS (storage for the racks; more on this later)
  • /home XFS (with quota support)

I wasn’t brave enough to put the / partition on a logical volume, but I hope 5 GB will be enough (although we all know Bill Gates’ famous phrase about memory consumption).

I’ve installed the following software on the base system (leaving out editors and other unimportant stuff):

  • sshd (remote management with chroot support)
  • mdadm (RAID management)
  • smartmontools (monitoring the hard disks)
  • syslog-ng (for logging)
  • git (version control for /etc and the rack manager software)
  • backup-manager (creates backups of /etc and, later, the racks’ read-write partitions, with remote upload support)
  • jailkit (I built a Debian package in a buildd debootstrap environment because I only found it in source form; it’s a very good tool for building and managing chroot environments)
  • python (for jailkit and for my rack manager)
  • aufs (for the rack environment, though I’m considering funionfs: aufs is a kernel module while funionfs runs in user space via FUSE, so on a crash only the FUSE process dies, not the whole system)
  • squashfs (for creating racks and snapshots)

The rack system will be in the next part…