First, we need to think about the language, because that will determine which operating system to install on the server. For example, choosing ASP.NET makes Windows Server practically mandatory; there are alternatives like Mono that let us work on Linux, but it is not as complete as the native Windows libraries.
Any other language lets us work with either Linux or Windows, although Linux is usually preferred because of its good packaging system.
ASP.NET is not actually a language, though, but a framework. It works with Visual Basic, C# and J#.
Using a web framework is a good decision, because it solves many common web development problems and provides a sensible file structure to work with. This speeds up our work considerably.
Languages and frameworks
PHP is probably the most popular language for web development. It comes pre-installed on almost all hosting services. Its syntax is very similar to C and Java, so coming from those languages (as in my case) gives you a head start.
It started as a procedural language, made the transition to object orientation in version 4, and finally became a true object-oriented language in version 5. Versions 7 and 8 brought more features to the language, along with great improvements in speed and memory consumption.
Facebook is built with PHP, although they wrote some libraries and compilers to optimize the speed.
Nowadays there are many good alternatives, such as Laravel.
Python is a language with a simpler syntax than PHP. It is designed to produce very readable code, and for that reason it is often recommended for learning to program.
It is also well tested: Google chose it to develop many of its services, and that says a lot.
I haven't used this language for any web application, but I did use it to develop a Tetris clone with the Pygame library.
The most popular framework for Python is Django.
Ruby is designed to be a fun language. As the slogan says: a programmer’s best friend. It has a focus on simplicity and productivity with an elegant syntax.
In Ruby everything is an object, which is interesting because it encourages the programmer to think this way when developing.
But, in my opinion, the most amazing thing about Ruby is the community. There is a huge amount of libraries (called gems) that you can use in your projects, making development very fast. The popular gems (which are many and varied) are well maintained and constantly improved.
Twitter was built using Ruby, although it has since been rewritten in Java.
Micro-frameworks are designed for small applications: they involve few files and are easier to maintain than (mis)using a full framework.
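The core idea behind most micro-frameworks is simply mapping routes to handler blocks. Here is a toy sketch of that idea in plain Ruby; the class and method names are illustrative, not a real framework's API:

```ruby
# Toy sketch of micro-framework routing: map "METHOD + path"
# to a handler block, dispatch incoming requests to it.
class TinyApp
  def initialize
    @routes = {}
  end

  # Register a handler block for a GET path.
  def get(path, &handler)
    @routes[["GET", path]] = handler
  end

  # Dispatch a request to the matching handler, or 404.
  def call(method, path)
    handler = @routes[[method, path]]
    handler ? [200, handler.call] : [404, "Not Found"]
  end
end

app = TinyApp.new
app.get("/") { "Hello, world!" }

status, body = app.call("GET", "/")
puts "#{status} #{body}"  # 200 Hello, world!
```

Real micro-frameworks add request parsing, middleware and templating on top, but the whole application can still fit in one file.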
JavaScript deserves a mention too. It wasn't until Microsoft developed the principles behind AJAX and Google exploited them with Google Maps and Gmail that the language became really popular and interesting projects such as Node.js started.
Nowadays it is one of the most popular web technologies, used both for the frontend (where it always belonged) and the backend.
Database management systems (DBMS)
For small or medium web applications, the backend developer is responsible for installing, using and optimizing the database. Bigger projects have a dedicated role for this: the database administrator (DBA).
MySQL is (still) the most popular DBMS for websites. It started as a stripped-down, fast tool, but kept adding features until it became a complete solution. It was purchased by Oracle, although there is a fork, MariaDB, that keeps the project open source.
Classic systems such as SQL Server, Oracle or DB2 are not as popular among web developers, although SQL Server is perhaps more common in the ASP.NET world.
In the Ruby on Rails community there is a popular alternative: PostgreSQL. It is a very powerful open-source DBMS with interesting extra features such as full-text search and a messaging system. It also has useful data types that integrate very well with Rails. This DBMS is by far my favorite.
Caching and key-value stores
It is good to avoid premature optimization, so the application can ship as soon as possible, but as more users arrive we will need to optimize our code.
When further optimizations are not possible or viable, we can turn to caching. Caching works by storing parts of the application statically so they can be served without any processing, which is much faster.
A cache can live in files, in a database or, ideally, in memory. If the data fits, memory is preferable, and there the king is Memcached. It stores information as key-value pairs, like a dictionary.
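The key-value idea can be sketched in plain Ruby, with a Hash standing in for Memcached (in a real setup you would talk to a memcached server through a client gem; this toy class only illustrates the fetch-or-compute pattern):

```ruby
# Toy in-memory key-value cache illustrating the Memcached idea:
# store the result of an expensive computation under a key and
# serve it without reprocessing on later requests.
class MemoryCache
  def initialize
    @store = {}
  end

  # Return the cached value for key, or run the block once,
  # cache its result and return it (the classic fetch pattern).
  def fetch(key)
    return @store[key] if @store.key?(key)
    @store[key] = yield
  end
end

cache = MemoryCache.new
calls = 0
render_page = -> { calls += 1; "rendered page" }

cache.fetch("home") { render_page.call }  # computed and stored
cache.fetch("home") { render_page.call }  # served from cache
puts calls  # 1
```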
The downside appears when the process halts or is killed: the cache has to be rebuilt from scratch. An alternative that avoids this is Redis, which works in a very similar way but also copies the information to disk, so it can be restored if the process is restarted.
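The persistence idea can be illustrated by dumping the store to disk and reloading it after a simulated restart. This is only the concept in miniature; Redis persists data far more cleverly, with snapshots and an append-only log:

```ruby
require "tempfile"

# Toy illustration of cache persistence: serialize the key-value
# store to disk so it survives a process restart.
store = { "session:42" => "alice", "views" => 10 }

# Persist the store before the process "dies".
file = Tempfile.new("cache-dump")
File.binwrite(file.path, Marshal.dump(store))

# After a simulated "restart", reload the store from disk.
restored = Marshal.load(File.binread(file.path))
puts restored["session:42"]  # alice
```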
When it comes to searching information efficiently without slowing down the server, we need a search engine. This tool analyzes the information that must be searchable and stores it in index files so it can be located as fast as possible.
There are many approaches to indexing the information: we can, for example, index it as soon as it is generated, or do it programmatically at given times. Which one to choose depends on the project we are working on.
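At its core, a search index is an inverted index: a map from each word to the documents containing it. A minimal sketch in plain Ruby (real engines add tokenization, stemming, scoring and persistence on top):

```ruby
require "set"

# Toy inverted index: word => set of document ids. This is the
# core structure behind full-text search engines, stripped of
# scoring, stemming and persistence.
docs = {
  1 => "ruby is a fun language",
  2 => "python is a readable language",
  3 => "ruby gems make development fast"
}

index = Hash.new { |h, k| h[k] = Set.new }
docs.each do |id, text|
  text.downcase.split.each { |word| index[word] << id }
end

# Querying is now a fast hash lookup instead of scanning
# every document on each search.
puts index["ruby"].to_a.inspect      # [1, 3]
puts index["language"].to_a.inspect  # [1, 2]
```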
The classic search engine is Apache Lucene. Besides plain information, it can also index documents such as PDFs. It can be used directly, but other tools build on top of it and provide more features, for example Apache Solr and Elasticsearch.
Another popular alternative is Sphinx.
Imagine that your application has to process something after an action initiated by the user. For example, the user signs up, and the application needs to send a confirmation email.
Sending an email can take a few seconds, so making the user wait until it is sent is not a good idea. Now imagine lots of sign-ups generating many emails; the problem becomes even bigger.
The best approach for usability is to queue the task so it can be done later, and give the user immediate feedback. For example, you could display a page saying that a confirmation email has been sent (although it may take some time to arrive). This is known as an asynchronous task.
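The mechanics can be sketched with Ruby's standard library: the request pushes a job onto a queue and responds immediately, while a background worker thread does the slow work later. Queue systems like the ones below add persistence, retries and a UI on top of this basic idea:

```ruby
# Toy background-job queue using Ruby's thread-safe Queue.
# The "request" enqueues work and returns at once; a worker
# thread sends the "email" later.
jobs = Queue.new
sent = Queue.new  # collect results in a thread-safe way

worker = Thread.new do
  while (email = jobs.pop)  # a nil job is our stop sentinel
    sleep 0.01              # simulate slow email delivery
    sent << "confirmation sent to #{email}"
  end
end

# The sign-up request only enqueues and responds immediately.
jobs << "alice@example.com"
puts "Sign-up complete, check your inbox!"

jobs << nil  # tell the worker to stop once the queue drains
worker.join
result = sent.pop
puts result  # confirmation sent to alice@example.com
```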
I have used some queue systems that integrate very well with Ruby on Rails. I started with Delayed::Job. It is very complete and it needs a database to work, but it lacks a graphical interface to inspect the queue and failed tasks, and tasks do fail!
The one I liked most is Sidekiq. It requires Redis to work and provides a graphical interface to control everything: it can show which tasks failed and how many attempts each one took. It uses threaded workers, which makes it more memory efficient and gives it better performance.
Because processing many tasks can make these systems use a lot of memory or CPU, it is a good idea to use a monitoring tool. God and Monit are the classic choices, but Sidekiq's author wrote Inspeqtor, a very interesting alternative that works nicely with Sidekiq.
Without the help of the community and the large amount of libraries available, backend development teams would require more people.
There are many responsibilities and many areas to cover, so knowing about those areas and the libraries that we can use makes our work easier and lets us focus on the important thing: developing a custom and unique application.
What do you like especially when working as a backend developer?