Eight Rules for Effective Software Production

Over the course of my career, I’ve participated in many real-life software projects and observed how things are done at every level: decision making, adoption of practices, team building, recruiting, skill distribution, etc. Obviously, different approaches yielded different results. Being an improvement-oriented type of person, I noticed and collected the most effective practices and practical tricks to help me in my work.

Learning from observation is a hard and lengthy way to acquire this knowledge. I would have been extremely happy to pick it up earlier from books instead. Unfortunately, I found none on the topic, so I decided to share my experience with other seekers of this kind of knowledge. Hopefully, it’ll save them a few years of personal research.

In this article, you will learn how you can beat average industry performance by producing sturdy, reliable software products that require 5-10 times less maintenance. I can say without false modesty that in the past 10-15 years, I (personally, as well as my teams) have exceeded all expectations, leaving a trail of successes behind. Managers couldn’t be happier.

8 Simple Rules for Effective Software Production

Once, my team pulled off an important project in an impossibly short time frame, for which we were given a “High Performance Team” award by upper management. All this without exhausting ourselves with nights and weekends. Just normal work.

You see, effective software production knowledge is itself a kind of power. In fact, it is a sort of black magic that not many people can grasp even when it is explained in plain words. You’ll get it here for free. Read on if you want to be perceived as a software production magician.

Who Is This For?

Let me make an important, important, important disclaimer here.

I address this to those in need of a productivity boost. Not everything in life is about productivity. Not all software projects either. There are cases when you are not judged on your performance. Obviously, these practices wouldn’t help then.

These techniques will be most useful for team leaders, architects, and project managers, although senior software developers can benefit from them too.

Rule No. 1: Understand the IT Mentality

The IT industry is a mix of science, technology, art, and business. It is quite difficult to navigate there without understanding these aspects on a good enough level. The biggest problem is that the industry itself is quite complex; therefore, best practices are complex too. You need to learn a lot and verify your knowledge by practicing a lot to succeed.

The incredible rate of technology updates makes it doubly tough. Little of what you learned ten years ago is needed anymore, so you need to keep learning at an accelerated pace.

Summarizing all of the above: succeeding in the IT field is based not on innate skills or feelings but on hard practical evidence. Never, ever “follow your gut” or believe something solely based on feeling, including your own.

The best practice in adopting new ideas is to verify that somebody did it before and it worked.

If yes, the idea is worth considering. Otherwise, demand a very good and very detailed explanation of how choosing this path makes your team’s life better. If it passes this test, schedule a lightweight proof-of-concept project that experimentally proves it fits into your environment.

The important thing to understand here is that there are no right and wrong solutions because there are many ways to solve software problems. However, there are good and bad understandings of the solution.

If a person can clearly articulate an idea in detail, or draw a link from adopting this solution to the team’s success in order to persuade other team members, then we can rely on this person’s vision and hope for a high chance of success.

Rule No. 2: Do Not Mix Software Production and Development Methodologies

Software production is based upon software development. However, these two have completely different goals, mindsets, and practices. Trying to solve a problem from one of these realms with the methods from the other produces ridiculous results. It is important to understand the distinction and to use appropriate methods for each of these worlds.

Software development is a combination of art and craft. The art component will always be there regardless of automation tools and methodologies out there. Therefore, solving development tasks requires maximum concentration and shielding from all other distracting signals.

The best way to motivate an experienced developer is to present them with a task in its pure technical form, with all human factors excluded. All requirements should also be expressed in technical language, and they should be easily verifiable so the developer can navigate toward the goal during their solo research phase.

Software production, in contrast, belongs more to the business administration domain. You know what your customer needs on one side, and you know what team resources you have at your disposal on the other. So you direct your team’s efforts to reach the goal. Meanwhile, you can also estimate your progress speed and present a well-versed plan to the boss. The important skills here are understanding your customer’s wishes, understanding your team’s strengths, and communicating formal plans and schedules.

This being said, I’d like to highlight that many roles in software development are working in both of these worlds—in building a bridge between business and technology—such as team leaders, architects, analysts, and project managers. People in these roles should be able to walk easily between two planes of reality and understand when it’s time to talk business and when it’s time for art.

Rule No. 3: Use Persistent Storage as an Extension to Human Memory

Human memory, although amazing in capacity, has its limits. You remember things with unpredictable accuracy and duration, and when you forget, there is no way to recall it at will.

That’s why we use persistent memory storage to move along at a predictable speed. This is not about formal documentation like user’s manuals that you create long after the fact and for other people to use. This is about using documents literally as your memory’s external extension during the work that helps you to go through the process.

I recommend that you document your thoughts and plans along the way whenever you are facing non-trivial tasks or tasks that involve more than one person. Try to make it as cheap as possible. Don’t create formal documents with a company logo on them. A good tool would be a company wiki with your project space in it. Create dedicated pages for tasks or problems (30 seconds), then update them every time you’ve got an idea or are about to discuss it with others.

Take a pause in conversation and update it immediately while you still have this thought flying in your head.

In a meeting, say “hold on, let me put this down” and spend 10-30 seconds expressing it in writing. The writing should not be extensive, but it should be complete and coherent, as if you are transferring the idea in its entirety onto paper. Later on, you or anybody else reading your passage should understand it as clearly as you understand it right now. This trick saves a lot of time, yet allows you to document on the fly.

This technique serves two purposes.

First, it locks in your progress on the way to success by pressing it hard into stone. No more risk of somebody forgetting something, reiterating the same thing again and again, or renegotiating what was already negotiated.

Second, you clear your mind, dumping out the problem that was nagging at you. Now your mind is hungry for the next challenge. What a productivity boost!

This applies to any size of project or task. For bigger ones, you’ll just have larger spaces with a hierarchy of pages that grows gradually as your project evolves. The important concept here is to prepare a documentation space and structure before you start your task, to establish a quick memory-dumping protocol!

For people favoring technological analogies, I would compare our mind to a processor with immense processing power but limited working memory. You can essentially think about one thing at a time. In this analogy, your documentation serves as persistent storage, while your mind solves complex problems iteratively. At some point, you decide to start the next iteration: you read the previous documentation, load the current state into your working memory, think about it for a while, and update the code and documentation with your new findings. Step by step, until it is complete.

All that being said, I do not encourage people to process a lot of tasks at once. On the contrary, the fewer tasks you have, the better. Not many work situations are truly human-optimized, though; multitasking may be required, and you have to handle it somehow. The above trick helps you handle it better.

Rule No. 4: Stop Wasting Time on Formal Time Estimation

No two projects are alike. The next time you do a similar project, you will have different customers, different goals, a different team, maybe even different technologies. Even using standard tools and components, you will still need to customize their configuration and architecture. When you handle software projects, keep in mind that they involve somewhere between 50% and 100% custom work. They require research, discussions, thinking, trials, and other highly unpredictable activities. In practice, you may experience an enormous difference between what appears on the surface to be the exact same project type and what you’ve actually done before. A truly new project type, by extension, is almost impossible to estimate exactly.

If it is so unpredictable, then how are project managers supposed to present a project schedule and stick to it?

There is one formal method of doing this described in the literature; namely, to split the whole project into smaller steps, estimate how long each step takes, and then calculate total project length by combining the work length of individual pieces. There is tons of theory behind this method taught in MBA courses.

Unfortunately, though, no amount of math can save it. This method is notoriously inaccurate, to the extent that it is completely unusable and impractical, not to mention incredibly time-consuming. I never saw a project manager who used formal calculation methods without any adjustments, not even among methodological fanatics. Not even when the company strictly imposed the use of such methods! In fact, the best-performing managers use completely different methods, as described below.
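The inaccuracy of the additive method has a simple statistical root: real task durations are right-skewed (a task can overrun badly but can only finish slightly early), so summing “most likely” per-task estimates lands well below the realistic total. A quick Monte Carlo sketch illustrates this; all the numbers here are invented for illustration, not taken from any real project:

```python
import random

random.seed(42)

N_TASKS = 20             # hypothetical project split into 20 steps
MODE = 5.0               # "most likely" estimate per task, in days
BEST, WORST = 4.0, 15.0  # right-skewed: small upside, large downside

def simulate_project():
    # each task's actual duration drawn from a skewed triangular distribution
    return sum(random.triangular(BEST, WORST, MODE) for _ in range(N_TASKS))

naive_total = N_TASKS * MODE               # sum of point estimates: 100 days
runs = [simulate_project() for _ in range(10_000)]
expected = sum(runs) / len(runs)           # ≈160 days for these numbers

overrun = sum(r > naive_total for r in runs) / len(runs)
print(f"naive plan: {naive_total:.0f} days, simulated mean: {expected:.0f} days")
print(f"probability of blowing the naive plan: {overrun:.0%}")
```

With these made-up distributions, the “calculated” 100-day plan is almost guaranteed to be blown, which matches what practitioners observe with the formal method.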

A good project manager tunes up their gut feelings by studying and observing lots of past projects.

A good project manager takes notice of the project type, environment, resources involved, organization type, and all other work aspects influencing the actual project length. Of course, nobody needs to learn solely from their own mistakes. Such observations can be made both directly and indirectly; for example, through books, by studying projects done by other people, or even by merely passing the knowledge from person to person. Such statistical, top-level knowledge improves one’s ability to navigate a project schedule.

I would like to highlight two important consequences of the above described method.

First, estimation accuracy improves with experience. There is no possible way an inexperienced person armed with whatever strong methodology they have can be good at it. Second, even the best estimate is still good only in statistical terms. That is, one can say that a certain project may take somewhere between four and twelve months. Supposing this is correct, one should understand that there is a 50% chance the project will run over its eight-month average time.

Understanding the statistical nature of the prediction has an incredible effect. A wise manager would simply put a twelve-month estimate on a project like that and then wow everybody by completing it early. In other words, nobody should expect a team to follow a project schedule to the day.

The general advice to project managers and their bosses would be to stop wasting time on formal time estimate methodologies. Instead, encourage the collection of statistical information about project duration and share this information across the company. I know companies where such an approach was implemented on a company-wide basis, resulting in miraculous predictive precision.
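In code, this statistical approach boils down to keeping a record of how long comparable past projects actually took and quoting a high percentile instead of the mean. Here is a minimal sketch; the history list is hypothetical and would be replaced with your company’s own records:

```python
# durations, in months, of hypothetical past projects of a similar type
history = [4, 5, 5, 6, 7, 8, 8, 9, 10, 11, 12, 12]

def percentile(data, p):
    """Nearest-rank percentile: the value below which p% of observations fall."""
    ordered = sorted(data)
    k = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[k]

average = sum(history) / len(history)   # ≈8.1 months: a coin-flip estimate
safe = percentile(history, 90)          # 12 months: quote this to the boss

print(f"average: {average:.1f} months, 90th percentile: {safe} months")
```

This echoes the four-to-twelve-month example above: quoting the 90th percentile rather than the average is precisely what lets a manager look like they finish “early” most of the time.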

Rule No. 5: Understand the Cost of Switching Tasks and Juggling Priorities

The human mind is amazingly engineered by nature. But even though it is incredible, it has its limitations. In other words, it is designed to excel in particular areas and at particular types of tasks.

A developer’s mind is definitely a great asset in software development. Would you, as a project manager, be interested in understanding it better and putting it into a position where it performs best?

Let’s put it in simple terms, avoiding too much theory. Remember, theory only takes you so far before you need to learn from experience, either your own or that of others.

The human mind has strong problem-solving and idea-generation potential. Unfortunately, it is not always possible to tap into this potential, mainly because to support idea generation, you need to keep all pieces of the problem together in your active memory at the same time. That’s why solving complex problems goes through a simplification stage, when a task is generalized or reformulated to cut out unimportant pieces and decrease the number of elements kept in memory simultaneously. In other words, we can either solve one very narrow, complex problem or multiple simple ones.

The ratio is not linear, though. Increasing the number of things you do simultaneously drastically impairs your problem-solving abilities. That’s why humankind has always employed, and always will employ, role separation as a life optimization. Two people working separately on two tasks will make a breakthrough faster than if they both work on both tasks at the same time.

Another important trait of the human mind is its inability to switch between tasks instantly, as computers do. Indeed, you cannot just stop thinking about something at will. You can’t immediately start thinking about a new concept at full speed either. That sort of mental inertia is perfectly illustrated by air traffic control operators. Even though they are looking at the radar and seeing the whole picture, they still need to load it into their memory to operate quickly. A new operator needs to watch the screen for ten minutes before they can replace the outgoing one at a shift change.

What makes it worse is that we cannot forget things at will. Everything we’ve learned stays with us and just gradually fades with time, occupying space that we could use for new knowledge. And even worse, this effect is compounded by an “unfinished business” feeling at times. It is much easier to forget something that is completed and which you won’t ever need in the future. Whereas when you put things aside to finish later, your brain naturally clings to the information marked “for future reference,” unwilling to let new knowledge take its place.

Okay. Now that we’ve outlined the idea of switching tasks, let’s see how it works in a real-life (so to speak) thought experiment.

Imagine you have your ten regular developers working on ten regular tasks—one task per person. Assuming we can enclose them into a perfect distraction-free environment, each task can be solved in a certain amount of time by a single mind. The whole thing will take as long as it takes to complete the longest single task.

Now, let’s repeat the same mental experiment. This time, we will be constantly switching task assignments between the developers for no (important) reason. Every day, each developer gets a new task to work on. Even better, let’s switch it up every hour. How soon will they finish, do you think? Maybe never.

The project manager in the first scenario was able to execute the project effectively. The second managed to “execute” it too, that’s for sure…in the sense that they facilitated its death. Congratulations. This technique of project killing is extra effective because, on top of merely wasting time, it also drops employee morale to zero.

When people experience this kind of “task carousel,” they lose all sense of achievement and realize that this project is going nowhere.
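The thought experiment can be turned into a toy model. Assume, purely for illustration, that every switch back to a task costs a fixed chunk of re-immersion time before productive work resumes:

```python
TASK_HOURS = 40   # focused effort one task really needs
RELOAD = 0.5      # hours spent re-loading mental context after a switch
WORKDAY = 8       # working hours per day

def focused_days():
    # one developer stays on one task: a single context load at the start
    return (RELOAD + TASK_HOURS) / WORKDAY

def carousel_days():
    # tasks rotate every hour: each hour yields only (1 - RELOAD) productive time
    return TASK_HOURS / (1 - RELOAD) / WORKDAY

print(f"focused:  {focused_days():.1f} working days per task")   # 5.1
print(f"carousel: {carousel_days():.1f} working days per task")  # 10.0
```

Even this generous model, which charges only half an hour per switch and ignores morale entirely, doubles the delivery time; in real life the penalty is usually worse.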

Most people would agree with the above example when it is presented to them in a purely academic way like that. In real life, however, they suddenly forget everything under the slightest pressure. The big boss demands a progress report, or the customer asks about a certain feature’s implementation date—nearly any external event can make a project manager rush to the team and express their concern, followed by a flurry of task reassignments and priority juggling in an attempt to win a bit of time here and there, ultimately resulting in nothing but throwing the project even further off track.

That is a real-life scenario that occurs quite often, unfortunately.

A good manager stands up and shields the team from such minor disturbances by absorbing the emotional shock-wave and converting it into productive future discussion items. That is definitely hard emotionally, but it’s the only way to keep the team in good working rhythm and to let them deliver.

Rule No. 6: Use Architecture Reviews as a Way to Improve System Design

The IT industry operates with the notions of over- and under-engineering. When it comes up in an interview, everybody says that over-engineering is bad. That one is easy to answer because the question itself conveys the negative connotation of “over,” which means “too much.” The real practical know-how is recognizing when your architecture is becoming over-engineered and avoiding it at an early stage.

Let me give you a few of my tried-and-true pointers on that.

First of all, a solution can be considered over-engineered if there is another, simpler solution delivering all of the required functionality. That means that if you don’t know a simpler solution, then the simplest solution you can offer is the best one in your eyes, unless someone proves you wrong.

If our imaginary architect genuinely strives for solution perfection, the usual architecture review guarantees the solution is optimal enough. Unfortunately, an architecture review requires at least a few other qualified architects, so it runs the danger of being unavailable or impractical in many cases. In the absence of peer review, architects are prone to common mistakes. Let’s review them one by one and discuss possible remedies for each.

One of the most popular mistakes is designing without a business aim in mind. It seems obvious that any work activity should be tied to the end consumer’s satisfaction or company success or a similar business need. Yet often, you can see architecture designed in whole or in part without such purpose in mind. The reasoning is either absent or boils down to using as many modern bells and whistles as possible.

The architect in this case doesn’t do what the consumer has paid for. Rather, they play with cool toys for their own fun and pleasure. This is in no way acceptable in a professional industry. And yet, it seems to happen quite often anyway.

Sometimes, the problem lies in the architect’s personality and their obsession with certain technologies or tools. They just like using them and cannot coherently explain what business need they are trying to solve. Close to that is another case, when people know nothing besides building something from small pieces. Sure enough, any external event triggers their urge to dive into the design world and never come back to the real client. Even though the initial trigger may be valid business input, their prolonged detachment from reality diminishes their artwork’s usefulness.

The cure for this is very simple, but it requires self-discipline. A good architect should never touch pen and paper until they can clearly and honestly answer for themselves why it is needed. Such a need could come in different forms. It could be a formal requirement, an implicit need for product improvement, or the emergence of new technologies that render the previous design less effective. In any case, it need not be a formal trigger, as long as the architects themselves understand the driving force. They can then use this force as the ultimate verification of their design quality.

Another problem, one that is harder to detect, is related to block-architecture thinking. People with this mentality believe that there is a solution for everything and that said solution is always implemented as a building block. In other words, they translate functionality into components directly, without evaluating the architecture as a whole. When a need for some functionality arises, they may just attach a component delivering it to the system. Most of the time, this satisfies the formal requirements but leaves the system in an incoherent state. The new block wasn’t selected on the basis of compatibility with the existing system for development, support, or even the company’s licensing model. Nor does the team consider modifying the existing configuration or implementing the functionality via existing system capacity. As a result, system support and maintenance gradually turn into a convoluted nightmare, followed closely by performance degradation.

There is no simple solution for this problem, as engineering a system is an art, and it is never possible to predict whether a new component has to be added or can be avoided. The best practice probably would be to keep a backlog of maintenance and architectural problems accumulating over time, followed by periodic reviews of the overall system architecture. Such a periodic review may also bring newly emerged technologies into consideration. So the general purpose of architecture reviews should be not to fix problems, but to assess the potential viability of desired improvements and of the system as a whole against the looming inevitability of obsolescence.

Rule No. 7: Value Team Players

Most IT industry professionals have been asked in an interview whether they are good team players or whether they work well in a team. Yet probably nobody has ever seen a clear definition of this in the literature. Obviously, such a person would contribute to team success in general, but few people can actually define the distinctive personal qualities that assure such success.

I have observed many people working at different levels and seen how their personal qualities influenced project progress. I would like to present the following pointers regarding the personal qualities that are most helpful in teamwork.


The first and foremost quality, of course, is an ability to communicate.

Imagine a person with zero communication abilities. Surely, receiving no feedback from team members renders them completely useless. This is so obvious that nobody actually measures this skill during an interview; it is implied that the skill is at a good enough level as long as the person can talk well.

Communication is not a binary yes/no skill; it is more of an information transfer window. The wider it is, the faster and clearer the information exchange.

Communication skill is a multiplier for all other skills the person has.

Since the range of that window’s openness varies greatly across the population, the width of the window is an important characteristic of a team player. Keep in mind that in this context, we are talking about conveying information, not about smooth talking, encouraging people, motivating them, or organizing them through talk and communication.

Communication style is also irrelevant. Information can be delivered orally, textually, in pictures, or in a mixed way. The person can talk fast or slow. They can be friendly, like looking into your eyes and smiling all the time, or they can look away and speak in a monotonous voice. The style may affect your personal perception of your coworker, but as long as you clearly understand what they mean, any style is sufficient.

Let’s move to practical cases on detecting and measuring communication abilities.

There are two major aspects to communication skills in general. The first is a willingness to share information. Some people are eager to share, while others try to conceal information. That inclination is more or less innate, but it can be changed slowly with self-motivation and training. It is safe to assume that a person displaying one sort of information-sharing willingness will demonstrate it in the future too. That’s why this trait is good for predicting a candidate’s future success on a team.

In real life, people trying to conceal information are easily noticeable. They usually try to give away intentionally useless information instead of anything that is actually needed. Or, they ask preliminary questions to turn the flow of information inward and minimize their answers to a need-to-know basis. Whatever their tactics, you will feel in the end that you didn’t get the desired information from them, or that getting it required too much extra effort.

It is important to understand the intent, as an open person may also ask you preliminary questions, in order to better understand your question and then deliver the answer in the way most convenient for you. A person intending to conceal information will instead ask additional questions just to steer the conversation away and never answer your initial question.

Another part of communication skill is an ability to tune to the listener.

Different people have different levels of topic understanding, different communication styles, and maybe even different interests in specific details. Some communicative, smart people will vary their conversation flow depending on the listener’s ability to follow it and prepare their answer to deliver specific information. In such preparation, some preliminary questions may be asked to narrow the listener’s interest down. This ability to “work out” communication differences is a really great skill, as it allows us to achieve understanding in almost all cases. Less flexible talkers, on the other hand, may at times get stuck in unsolvable dead ends of misunderstanding.

Understanding Strengths and Weaknesses

Let’s focus on another personal quality essential for a team player.

Most people would agree that a team environment should be more friendly than the average surrounding world to foster collaboration and boost productivity. Therefore, it is important for a team to understand each member’s strong and weak areas to distribute tasks properly and to cover weaknesses with strengths. The first step on this path is for all members to honestly measure their skills against each other. Psychologically, this may be tough as we naturally tend to conceal our weak spots from others, protecting ourselves.

This is where the friendly team atmosphere comes to help.

Building trust is a two-man job.

So a new member is expected to play by the team’s rules. Unfortunately, some people cannot lower their guard even in a friendly environment. They behave like lone wolves throughout their whole lives. It is stronger than they are. Sadly, such an attitude doesn’t contribute to team efforts.

There is an easy technique for recognizing such lone wolves in an interview: they never, ever admit they don’t know something. Of course, people like to show their best, showing off all of their abilities and trying to solve every single hard problem. Yet there is a knowledge limit for everybody. Our limits shape our skills. Not admitting limits means that the candidate is presenting themselves as a jack-of-all-trades, equally good at everything and nothing.

When you hire a specialist, you probably want to avoid such people. Besides, that arrogant attitude often comes with another red-flag trait: unwillingness to ask for help. The ability to ask for help is absolutely essential to teamwork. Without it, we cannot progress and develop as quickly. Such a stubborn person will burn company resources and time, fighting difficult tasks indefinitely but never calling on teammates for help. There is an easy trick to detect such candidates in an interview: ask them a question that doesn’t make sense, or mention some nonsense term. A normal, ideally curious person will simply say they don’t know and ask what it is. A defensive person will never do that, even if you emphasize that there is no right or wrong answer and that “I don’t know” does not disqualify them.

Rule No. 8: Focus on Teamwork Organization

There is as little information on teamwork organization as on any of the previous topics above. Everybody knows that teamwork is better, but how to build and maintain a team remains a mystery. Even if it is impossible to cover all aspects of team building, we can at least explore a few key team-building techniques here.

Building Expertise

Each IT project is unique. To be successful in it, one needs to learn and master project specifics. They may include both technical and non-technical knowledge. An example of non-technical knowledge could be a personal network for management, customers, technical support teams, etc. Special technical knowledge is additional details on top of general IT skills.

For example, you may need to know Maven to build a project, but the exact build structure and the location of properties and filtering will still vary per project. As with any type of knowledge, mastering such details takes time; the more time one invests, the better they can perform. Time is the most valuable resource you have. You want to keep each team member focused on the same functional area for as long as possible to capitalize on their expertise and develop it even further, thus constantly improving team performance.

Unfortunately, it is not possible to do this indefinitely. On the one hand, people may simply get bored. On the other hand, you run the risk of unexpectedly losing their expertise, putting your project in jeopardy.

Let’s see if there are ways to cope with the downsides without impeding team performance much.

Most intellectual workers are natural learners. They would like to learn a particular area until they excel in it.

Distribute focus areas between team members and let them build up their expertise in them. At some point, they reach a high enough level that makes sense in the scope of this project. Extra learning effort won’t significantly improve it at this point. Without motivation and challenge, smart people grow bored and start hating their job.

Prevent this by opening up learning possibilities elsewhere for them. Keep them informed of other projects and the company’s technological stack, and open up new challenges. If their interest lies within the project’s scope, you get the double reward of keeping your team challenged and extending useful team skill-sets at the same time. However, any self-development is good for avoiding boredom, even if it doesn’t intersect with immediate project needs. As long as your experts’ brains are engaged, they keep supporting the already-learned project areas in the backs of their minds.

When leaving the company, team experts take a big portion of their expertise with them. One of the countermeasures you can use is widely disseminated documentation that can be updated on the fly. Think of the “persistent memory storage” mentioned earlier.

Still, a project manager would love to duplicate knowledge across team members’ heads if possible. Having two of every expert would be a simple solution, albeit twice as expensive. But there is a leaner way. The trick is to let most of your developers develop expertise in multiple areas, so that each project aspect is covered by at least one deep expert. At the same time, choose a few senior members to grow the breadth of their expertise along with its depth. Usually, this role is best played by a team development lead or an architect. The team lead interacts with all team members and participates in all task implementations. They can tap into each and every aspect of the project, understanding all of its internals. This way, even if you lose one of your experts, the leader can take over and keep the project progressing until you can hire and train a replacement.

Another flavor of the same idea is to cross-train developers in adjacent areas, letting them observe, learn, and occasionally try their peers’ work. Keep in mind that this cross-training needn’t be extensive, so it doesn’t disrupt focus on developers’ primary tasks and doesn’t impede productivity. Developing broad expertise in your leadership and cross-training your developers builds a cushion of protection against unforeseen setbacks and allows you some time to maneuver with project resources.

Minimizing Distraction

Software development is a chain of complex and creative problem solving. Even though, as the industry develops, more and more of these complex problems get automated, the work doesn’t become easier. It still involves a large amount of art and individual insight, which is very hard to predict and sometimes even harder to wrangle.

Developers are the edge of the weapon. Their concentration is equivalent to the hardness of the weapon’s tip. Increase their focus and you’ll cut through problems like a hot knife through butter. Distract them and you’ll end up clumsily poking at the butter with a blunted stick.

This cannot be overemphasized: Non-problem-solving work can be intensified with motivation or extra effort. For problem-solving work, you need maximum detachment from the mundane world. It is difficult to leave all everyday problems behind, and a good project manager should build a quiet development environment in both the physical and mental senses. A developer’s workplace should be analogous to a sensory deprivation tank.

Physically, this is implemented as a closed work space. Every team member should have a cube at least where they can dive into solitude. It is preferable to avoid loud noises and aisle conversations. Quick interpersonal communications should be maintained by emails and chats. Large groups should use closed rooms for their meetings to not distract others. This is a pretty standard picture for office life as it used to be.

Unfortunately, the open space paradigm is being adopted more and more widely, even in large offices. I would warn against this fad. Even worse, together with open spaces, top management encourages in-place team conversations. That is, in essence, shouting to a person in another row over an uninvolved team member’s head. A developer who is interrupted by loud conversation twenty times a day won’t have a shred of concentration left. Certainly a major performance killer.

Allowing for a Learning Curve

Knowledge itself is power. This is especially true in such a complex industry as IT. Every task here has its regular cycle: learn, research, implement. The learning phase in particular is invaluable. Not only does better understanding allow better and faster implementation, there are certain knowledge thresholds that must be passed in order to achieve anything at all. Of course, there is no point in over-learning either: each skill should match the production need, not greatly exceed it.

However, developers are often pressured in the opposite direction: to stop learning and do nothing but produce. Learning is perceived as wasted effort, as it doesn’t move the task progress bar. This seems like a really easy problem to solve, sitting at home and reading this article. Yet in real life, the slightest work pressure turns every manager into that demanding idiot who insists that everybody “stop learning and start doing something already.” I swear, I have heard those exact words so many times throughout my career… A good manager and team leader should understand that learning is an important part of the process even if it doesn’t directly increment the progress bar.

Building a Competitive Development Workshop

The tips and tricks presented above are a subset of effective software production expertise. By understanding and applying them in real life, you’ll improve your production effectiveness little by little. If you think they are largely unconnected and lacking a theoretical base, you are absolutely right.

I would like to highlight that building a competitive development workshop is not a single-discipline task. It requires knowledge and expertise in multiple adjacent areas. Building such expertise is hard work. There is no single theoretical base or idea that would solve all your problems at once. Believing in such a silver bullet just serves to distract you from the real goal.

Try out these tips at work to see if they are worth adopting permanently. If you find them useful (or not), leave a comment below and share your experience!

This article was originally posted on Toptal.

Getting Started with Elixir Programming Language

If you have been reading blog posts, Hacker News threads, or your favorite developers’ tweets, or listening to podcasts, at this point you’ve probably heard about the Elixir programming language. The language was created by José Valim, a well-known developer in the open-source world. You may know him from the Ruby on Rails MVC framework, or from the devise and simple_form Ruby gems that he and his co-workers at Plataformatec have been working on over the last few years.

According to José Valim, Elixir was born in 2011. He had the idea to build a new language because of the lack of good tools for solving concurrency problems in the Ruby world. At that time, after spending time studying concurrency- and distribution-focused languages, he found two that he liked: Erlang and Clojure, the latter of which runs on the JVM. He liked everything he saw in Erlang (the Erlang VM), but hated what he didn’t see, like polymorphism, metaprogramming, and language extensibility, which Clojure was good at. So Elixir was born with that in mind: to be an alternative to Clojure, a dynamic language running on the Erlang Virtual Machine with good extensibility support.


Elixir describes itself as a dynamic, functional language with immutable state and an actor-based approach to concurrency, designed for building scalable and maintainable applications with a simple, modern, and tidy syntax. The language runs on the Erlang Virtual Machine, a battle-proven, high-performance, distributed virtual machine known for its low-latency and fault-tolerance characteristics.

Before we see some code, it’s worth saying that Elixir has been embraced by a growing community. If you want to learn Elixir today, you will easily find books, libraries, conferences, meetups, podcasts, blog posts, newsletters, and all sorts of learning resources out there; it has even been welcomed by Erlang’s creators.

Let’s see some code!

Install Elixir:

Installing Elixir is super easy on all major platforms, and it’s a one-liner on most of them.

Arch Linux

Elixir is available on Arch Linux through the official repositories:

pacman -S elixir


Ubuntu

Installing Elixir in Ubuntu is a bit tedious, but easy enough nonetheless.

wget https://packages.erlang-solutions.com/erlang-solutions_1.0_all.deb && sudo dpkg -i erlang-solutions_1.0_all.deb
sudo apt-get update
sudo apt-get install esl-erlang
sudo apt-get install elixir


Install Elixir in OS X using Homebrew.

brew install elixir

Meet IEx

After the installation is completed, it’s time to open your shell. You will spend a lot of time in your shell if you want to develop in Elixir.

Elixir’s interactive shell, or IEx, is a REPL (Read-Eval-Print Loop) where you can explore Elixir. You can input expressions there and they will be evaluated, giving you immediate feedback. Keep in mind that your code is truly evaluated and not compiled, so make sure not to run profiling or benchmarks in the shell.

The Break Command

There’s an important thing you need to know before you start the IEx REPL – how to exit it.

You’re probably used to hitting Ctrl+C to close programs running in the terminal. If you hit Ctrl+C in the IEx REPL, you will open up the Break Menu. Once in the Break Menu, you can hit Ctrl+C again to quit the shell, or press a (for abort) followed by Enter.
I’m not going to dive into the break menu functions. But, let’s see a few IEx helpers!


IEx provides a bunch of helpers. To list all of them, type h and hit Enter; the full helper message will be printed.

Those are some of my favorites; I think they will be yours as well.

  • h/0

    as we just saw, this function prints the helper message.

  • h/1

    which is the same function, but now it expects one argument.

For instance, whenever you want to see the documentation of a module or function, you can simply pass it to h/1.

Probably the second most useful IEx helper you’re going to use while programming in Elixir is c/2, which compiles a given Elixir file (or a list of them) and accepts, as an optional second parameter, a path to write the compiled files to.

Let’s say you are working on one of the http://exercism.io/ Elixir exercises, the Anagram exercise.

So you have implemented the Anagram module, with its required method, in the anagram.exs file. As the good developer you are, you have also written a few specs to make sure everything works as expected.

Your current directory now contains anagram.exs alongside its test file.

Now, in order to run your tests against the Anagram module, you need to run/compile the tests:

elixir anagram_test.exs

As you just saw, to compile and run a file, simply invoke the elixir executable, passing as an argument the path to the file you want to run.

Now let’s say you want to run the IEx REPL with the Anagram module accessible in the session context. There are two commonly used options. The first: you can require the file by using the -r option, something like iex -r anagram.exs. The second: you can compile it right from the IEx session using the c helper:

c "anagram.exs"

Simple, just like that!

Ok, what about if you want to recompile a module? Should you exit IEx, run it again, and compile the file again? Nope! If you have a good memory, you will remember that when we listed all the helpers available in the IEx REPL, we saw a recompile helper, r/1. Let’s see how it works:

r Anagram

Notice that this time you passed as an argument the module itself, and not the file path.

As we saw, IEx has a bunch of other useful helpers that will help you learn and understand better how an Elixir program works.

Basics of Elixir Types


Numbers

There are two types of numbers: arbitrary-sized integers and floating-point numbers.


Integers

Integers can be written in decimal, hexadecimal, octal, and binary notation.
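As a quick illustration (the literals below are our own, not from the original article), here is the same value written in all four bases:

```elixir
# The same integer written in four different bases
IO.puts(255)        # decimal
IO.puts(0xFF)       # hexadecimal
IO.puts(0o377)      # octal
IO.puts(0b11111111) # binary
```

All four lines print 255.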

As in Ruby, you can use underscores to separate groups of three digits when writing large numbers. For instance, you could write a hundred million like this:

100_000_000
Floats

Floats are IEEE 754 double-precision values. They have about 16 digits of accuracy and a maximum exponent of around 10^308.

Floats are written using a decimal point. There must be at least one digit before and after the point. You can also append a trailing exponent. For instance: 1.0, 0.3141589e1, and 314159.0e-2.
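A small sketch of ours to confirm these literal forms:

```elixir
# Float literals: a decimal point with digits on both sides,
# optionally followed by an exponent
IO.puts(1.0)
IO.puts(0.3141589e1)  # scientific notation
IO.puts(3.14159e-2)   # negative exponents work too
```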


Atoms

Atoms are constants that represent names. They are immutable values. You write an atom with a leading colon (:) followed by a sequence of letters, digits, underscores, and at signs (@). You can also write an atom as a leading colon followed by an arbitrary sequence of characters enclosed in double quotes.

Atoms are a very powerful tool; they are used to reference Erlang functions, as well as keys and Elixir module names.

Here are a few valid atoms.

:name, :first_name, :"last name",  :===, :is_it_@_question?


Booleans

Of course, booleans are the true and false values. But the nice thing about them is that, at the end of the day, they’re just atoms.
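A tiny snippet of ours demonstrates that booleans really are atoms:

```elixir
IO.puts(is_atom(true))      # booleans pass the atom check
IO.puts(is_boolean(true))
IO.puts(true === :"true")   # true is literally the atom :true
```

All three lines print true.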


Strings

By default, strings in Elixir are UTF-8 encoded binaries. You write them as an arbitrary number of characters enclosed in double quotes ("). Strings can contain interpolated expressions as well as escaped characters.

Be aware that single-quoted literals are not strings but charlists – lists of character codes.
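An illustrative snippet of ours showing interpolation and the string/charlist distinction:

```elixir
name = "world"
greeting = "Hello, #{name}!"      # interpolated expression
IO.puts(greeting)
IO.puts(String.valid?(greeting))  # double quotes build a UTF-8 binary
IO.puts('abc' == [97, 98, 99])    # single quotes build a charlist
```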

Anonymous Functions

As a functional language, Elixir has anonymous functions as a basic type. A simple way to write a function is

fn (argument_list) -> body end

. But a function can have multiple bodies with multiple argument lists, guard clauses, and so on.

Dave Thomas, in the Programming Elixir book, suggests we think of fn…end as being the quotes that surround a string literal, where instead of returning a string value we are returning a function.
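Here is a small sketch of ours, including the multiple-bodies form mentioned above:

```elixir
# A single-body anonymous function, invoked with a dot
add = fn (a, b) -> a + b end
IO.puts(add.(1, 2))

# Multiple bodies selected by pattern matching and guard clauses
describe = fn
  0 -> "zero"
  n when n > 0 -> "positive"
  _ -> "negative"
end
IO.puts(describe.(-5))
```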


Tuples

A tuple is an immutable indexed array. Tuples are fast when you ask for their size and slow when appending new values, due to their immutable nature. When updating a tuple, you are actually creating a whole new copy of the tuple itself.

Tuples are very often used as the return value of a function. While coding in Elixir you will very often see this,

{:ok, something_else_here}


Here’s how we write a tuple:

{1, :two, "three"}
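A short snippet of ours showing tuple access and the copy-on-update behavior described above:

```elixir
pair = {:ok, "contents"}
IO.puts(elem(pair, 1))      # constant-time access by index
IO.puts(tuple_size(pair))   # the size is known immediately
updated = put_elem(pair, 1, "new contents")
IO.inspect(updated)         # a brand-new tuple; pair itself is unchanged
IO.inspect(pair)
```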

Pattern Matching

I won’t be able to explain everything you need to know about Pattern Matching, however what you are about to read covers a lot of what you need to know to get started.

Elixir uses the = sign as a match operator. To understand this, we kind of need to unlearn what we know about = from traditional languages, where the equals sign means assignment. In Elixir, the equals operator means pattern matching.

So, that’s the way it works: if the terms on the left-hand side are variables, they are bound to the values on the right-hand side; if they are not variables, Elixir tries to match them with the right-hand side.
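A few illustrative matches of ours:

```elixir
{a, b} = {1, 2}            # a and b are variables: bound to 1 and 2
[head | tail] = [1, 2, 3]  # head = 1, tail = [2, 3]
{:ok, value} = {:ok, 42}   # :ok is not a variable: it must match literally
IO.inspect({a, b, head, tail, value})
# {:ok, value} = {:error, :oops} would raise a MatchError
```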

Pin Operator

Elixir provides a way to force a match against the current value of a variable on the left-hand side: the pin operator, ^.
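A minimal sketch of ours:

```elixir
x = 1
x = 2   # a plain = simply rebinds x
^x = 2  # ^x pins x to its current value (2), so this match succeeds
IO.puts(x)
# ^x = 3 would raise a MatchError instead of rebinding x
```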


Lists

In Elixir, lists look like the arrays we know from other languages, but they are not. Lists are linked structures consisting of a head and a tail.

Keyword Lists

Keyword lists are lists of two-element tuples (pairs).

You simply write them as lists. For instance: [{:one, 1}, {:two, 2}, {:three, 3}]. There’s a shortcut for defining them; here’s how it looks: [one: 1, two: 2, three: 3].

In order to retrieve an item from a keyword list you can either use:

Keyword.get([{:one, 1}, {:two, 2}, {:three, 3}], :one)

Or use the shortcut:

[{:one, 1}, {:two, 2}, {:three, 3}][:one]

Because retrieving a value from a keyword list requires a linear scan, lookups are an expensive operation; if you are storing data that needs fast access, you should use a Map.
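An illustrative lookup or two of ours:

```elixir
kw = [one: 1, two: 2, three: 3]
IO.puts(Keyword.get(kw, :two))
IO.puts(kw[:three])
# Duplicate keys are allowed; Keyword.get returns the first match
IO.puts(Keyword.get([a: 1, a: 2], :a))
```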


Maps

Maps are an efficient collection of key/value pairs. A key can be of any type, though it is common for all keys in a map to share the same type. Unlike keyword lists, maps allow only one entry for a given key. They remain efficient as they grow, and they can be used in Elixir pattern matching. In general, use maps when you need an associative array.
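A small example of ours (the user map is made up for illustration):

```elixir
user = %{name: "José", language: "Elixir"}
IO.puts(user.name)                      # dot access works for atom keys
IO.puts(Map.get(user, :language))
updated = %{user | language: "Erlang"}  # update an existing key
IO.inspect(updated)
# Matching on a subset of keys:
%{name: n} = user
IO.puts(n)
```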

This article originally appeared on Toptal

Celebrating 25 Years of Linux Kernel Development

Linux is now 25 years old, but it’s no hipster. It’s not chasing around Pokemon, and it’s not moving back in with its parents due to crippling student debt. In fact, Linux is still growing and evolving, but the core ideas of the Linux State of Mind remain the same.

You see, Linux is much more than an operating system, it’s a mindset. Even if you don’t agree with its philosophy, you can’t afford to ignore it.

That’s why we decided to pay homage to this iconic operating system and the ever-growing community of developers who keep it going.

25 years of Linux: Honoring the great penguin coup


To mark the occasion, the Linux Foundation recently published the seventh edition of its Linux Kernel Development Report, which offers a detailed recap of all the work done over the past couple of decades. The adoption of Git, 10 years ago, made tracking easier (not that we’re looking for exact numbers here). It’s estimated that more than 14,000 developers have invested time and effort in Linux kernel development since 2005. This army of talent comes from more than 1,300 companies, and the report lists a number of industry heavyweights as the main sponsors of Linux kernel development: Intel, Samsung, Red Hat, AMD, Google, ARM, Texas Instruments and more.

While it’s the epitome of open-source, Linux kernel development is not a hobby. Not anymore. So, as we wish Linux a happy birthday, let’s take a quick look at some kernel development highlights:

  • 25 years of development
  • Contributions from 14,000 developers since 2005
  • 5,000 new developers joined the effort in the past 30 months
  • ~22 million lines of code currently constitute the Linux Kernel
  • More than 4,500 lines of new code added each day
  • Development is speeding up

Linux State of Mind

When it was first released in August 1991, few could have imagined the long-term impact of Linus Torvalds’ open-source OS on the software industry. At the time, the tech landscape was dominated by a handful of big players, the likes of Microsoft, Apple, and IBM. The nineties were an era of rapid technological progress, and new technologies – most notably the Internet – made remote, distributed development a possibility.

Developers halfway around the globe could finally collaborate on immensely complex software projects. It goes without saying that Toptal, and indeed every freelancer, owes a debt of gratitude to Linux pioneers who validated the concept of remote software development in an era of dial-up internet. They made it work, without Git, Skype, broadband, and a bunch of other technologies and tools we take for granted today. In fact, most of these tools were in part made possible by Linux-based servers and many are open-source.

But what drove the industry to adopt Linux? Well, to put it bluntly, the simple fact of not being Microsoft was a big part of it. A lot of UNIX people just had an issue with proprietary operating systems and wanted an open-source alternative. Diehards couldn’t reconcile with the fact that mainstream operating systems were a proprietary walled garden. Their vision was to create an open-source alternative, something that everyone could use free of charge, something they could modify and redistribute at will.

Idealism and business rarely cross paths, but when they do, we often end up with novel ideas backed by passionate proponents and criticized by equally passionate detractors. The idea of an open-source software ecosystem is as powerful today as it was in the early nineties, and with a quarter century of Linux development behind us, we can get a better idea of its profound impact on industry.

Open-Sourcing and Democratising The Internet

But wait, most of us are reading this on non-Linux systems: Windows and Mac rigs, smartphones and tablets running UNIX-like operating systems, so why aren’t we on Linux systems? Well, we are, at least sort of. How many LAMP servers sprung into action today, to serve you your daily dose of emails, social feed updates, useless ads and (mis)information?

Personally, I think this is the biggest contribution to mankind made by the Linux community: Linux-based servers helped our industry take off and legitimized the open-source concept.

It was no longer about UNIX enthusiasts trying to create an open-source alternative to fight The Empire; Linux took on big brands on their home turf and emerged victorious. The concept was vindicated and mainstreamed, proving once and for all that open-source isn’t just a heartwarming notion; it’s good for business.

What did we get out of it?

Linux helped lower the bar for developers and entrepreneurs entering the industry. Successful Linux distros grabbed a sizeable market share in the hosting industry, generating pressure on competing platforms. In this war of attrition, Linux servers prevailed thanks to a number of factors. In the end, they came to dominate many market segments. Today, anyone can get a reasonably powerful hosting plan for peanuts, and if they’re looking for the cheapest possible solution, they’re bound to end up with a flavor of Linux. The rest of the stack is usually as free and open as Linux itself.

That’s what our side of the industry got out of Linux: The ability to quickly deploy products on low-cost, open-source infrastructure.

How many pet projects, started on the cheap, turned into multi-billion enterprises? How many would have failed had it not been for Linux?

Where’s the Money Linuxowski?

A common misconception about Linux development is that it’s handled solely by enthusiasts and that it’s not a niche for people looking to cash in. While Linux is a labor of love, it’s also big business in its own way.

As I highlighted earlier, development is speeding up, and more Linux developers from more companies are choosing to contribute. They’re not simply choosing to set aside their precious time because they are good Linux folk; the latest report states that the number of unpaid developers working on the kernel has dropped to 7.7 percent, dipping into single-digit territory for the first time.

While some might not agree, I see this as a very positive trend. Enthusiasm doesn’t pay bills, and it’s hard to keep any project going on enthusiasm alone for more than a few years, let alone a gargantuan project like Linux that came into being a generation ago.

It doesn’t end there. According to numerous surveys, demand for Linux talent remains robust, and is actually increasing, and so is the Linux server market share. A few years ago, it would have been much easier to tally up the number of shipped servers, motherboards, and other hardware, and figure out the number of Linux boxes in the wild.

This is no longer the case.

Linux in The Cloud

A dark Cloud came along and made this process more difficult, much to the dismay of analysts. When your job is to look at numbers and market trends, any lack of data or ambiguity is bad for business, and for a while analysts expressed concerns about the future of Linux in the post-cloud era. These concerns made a lot of sense (and, to some extent, still do) because the cloud ecosystem was an oligopoly from the get-go, dominated by the Amazons and Googles of the world.

Does the Cloud spell doom for cheap Linux servers and is there a silver lining?

The Cloud did not kill off small Linux servers, but it hasn’t been kind to them either:

  • At one end of the spectrum, you’ll find people who believe the cloud will transform the server market, and through consolidation, will forever change the hosting industry. This economy of scale argument is tempting because it’s logical to assume cloud industry leaders will offer superior pricing by virtue of their size. You don’t get sweetheart hardware deals if you have a small, regional datacenter and need a couple of hundred fresh boxes every year; you get them if you have a massive cloud infrastructure and need dozens of new servers on a weekly basis. However, I find this argument overly simplistic.
  • The opposing camp espouses equally simplistic views, but it tends to be more optimistic. A lot of Linux veterans have high hopes for cloud development; they believe CloudStack and OpenStack will help turn the tide, and they think there will always be room for smaller players.

As usual, the truth is somewhere in the middle, but let’s not weigh in on this; it’s beyond the scope of this article. Suffice it to say that both options could work for Linux in the long run. Even if the hosting industry is forever transformed and consolidated, that doesn’t mean demand for Linux talent will evaporate. On the contrary, it’s likely to increase regardless of what happens, although demand will evolve to meet new requirements.

The Next 25 Years

What do the next 25 years have in store for Linux?


It’s hard to say, but I have a feeling Linux isn’t going anywhere, at least not in the foreseeable future:

  • The server industry is evolving, but it’s been doing so forever. Linux has a habit of seizing server market share, although the cloud could transform the industry in ways we’re just beginning to realize. Either way, Linux servers aren’t going anywhere just yet.
  • Linux still has a relatively low market share in consumer markets, dwarfed by Windows and OS X. This will not change anytime soon.
  • Linux itself does not have a significant share in mobile, even though the Linux-kernel-based Android currently dominates the space. Mobile is becoming an Android/iOS duopoly, and the market is oversaturated; there are too many software and hardware platforms out there, so it’s doubtful a standalone Linux will ever take off in this market.
  • Gaming is a potentially huge, untapped market for Linux. This market is dominated by Windows in the desktop segment, proprietary operating systems in the console space, and Android and iOS in mobile. Valve’s SteamOS is the latest attempt to get Linux on gaming rigs, and it’s a promising concept. Unfortunately, demand for Steam Machines has been soft and Linux still has a negligible market share in the gaming industry.
  • Emerging segments include the Internet of Things (IoT), wearables, smart home devices, and more. Due to its open-source nature and the potential for a very small OS footprint, Linux-based operating systems could find their way into a range of connected devices, from our homes and cars to our places of business.
  • High-performance computing has a good chance of becoming a Linux-only space. Linux has practically replaced UNIX and other operating systems in current-generation supercomputers.

It’s hard to make Linux-related predictions due to the nature of the OS and the Linux community. Evolution doesn’t necessarily have to be a straight line, and Linux developers have proven this time and again. Linux could morph into something completely different over the next couple of decades and become the OS of choice for various products and services we can’t even imagine today.

Source: Toptal