I recently read How 'DevOps' is Killing the Developer, written by Jeff Knupp, which explores a current trend: software developers engaging in, or being expected to know and fulfill, DevOps-type roles and responsibilities. Jeff's argument is that all of the responsibilities imposed on a developer beyond writing code effectively destroy the developer by turning them into a 'technology utility player'. This imposition of additional responsibility and necessary expertise, probably on the backs of the brightest and most inclined/motivated, chips away at the real reason these developers ended up in software development: because they loved to code. Secondly, Jeff argues that the cultural movement to augment staff with 'technology utility players' is a business decision in the vein of companies wanting to 'act like a start-up', and that to be a successful start-up, you need to act like a successful start-up, which means leveraging the smallest number of people to do all of the jobs needed to succeed.
You purposely act cheap because every successful start-up started in a garage with two people and a lot of coffee.
As a software developer who wrote his first line of code out of curiosity, which quickly turned to enjoyment, I have a different perspective on the expectations and, really, the opportunity for software developers to be in a position to learn and experience 'DevOps'.
I have always been responsible for more than just writing code. Whether it be deployments, database design/changes, data ETL, production maintenance, customer negotiation, or any other not-writing-code event, I've been expected to perform. I've always seen these expectations as opportunities; ways to learn new things that hopefully will provide additional perspective and insight which ultimately makes me a better developer.
Start-ups started it all
Start-ups have, by necessity, needed engineers who could do it all, mainly because those were all the engineers, or the single engineer, they had. Start-ups did not have the budget to hire specialized players who do their one or two specific things really well. This led to the rise of the full-stack developer. Some could argue that frameworks like Rails, Sails, or any other 'full-stack' framework proliferated and/or exploited this movement, either by leveraging the expected capabilities of today's software engineer or by speeding things up for companies that were just trying to survive. Chicken or egg, perhaps?
I do agree that additional responsibilities placed on a developer take away from the developer's growth with respect to writing code. On its face, writing code is a brain/muscle-memory exercise, and constantly evolving your best practices, based on environments and challenges, is critical. Having time to learn new languages and explore their capabilities only enhances a developer's toolbox as challenges come and go. Supporting an environment where a developer has to choose between growth in their area of expertise and something else is something a company or customer should carefully weigh. However, this cost, with respect to DevOps-type needs, is not without some return for the developer, a return that can positively affect a developer's ability.
Let's take a look at a few responsibilities, often delegated to personnel filling other roles but laid by companies and clients on the backs of developers, and how learning those responsibilities helps the developer of today.
Deployment/operations, often fulfilled by a release manager or someone equivalent, is impacted by actions taken far earlier than deployment. One could argue that decisions made during development set the course for smooth or complex deployments. Architectural decisions, development decisions, coding styles, you name it: they all can and do play a part in how complex and time-consuming a deployment may be when the product is delivered and changes over the course of its lifecycle.
As a developer, it's one thing to be told 'the deployment is taking a long time' or 'the deployment seems overly complex and is causing users frustration during certain upgrades'. These are contrived examples of problems, but had developers been part of the deployment process, some of them might have been mitigated earlier in the development process. This may be discovered during a retrospective, but hands-on experience with developing the deployment scripts, recipes, etc., can also play a valuable role in reducing problematic deployment architectures.
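To make that concrete, here is a contrived sketch of a deployment 'recipe' expressed as plain data. Everything in it, the app name, hosts, and steps, is invented for illustration and is not from any real tool; the point is that when the developer writes the recipe, the cost of every step they added during development becomes visible to them.

```python
# A hypothetical, minimal deployment plan builder. App name, hosts, and
# commands are invented for illustration; this is a sketch, not a real tool.

def build_deploy_plan(app, version, hosts, run_migrations=False):
    """Return the ordered shell commands a deploy would run, host by host."""
    plan = []
    for host in hosts:
        steps = [
            f"ssh {host} 'systemctl stop {app}'",
            f"scp dist/{app}-{version}.tar.gz {host}:/opt/{app}/",
            f"ssh {host} 'tar -xzf /opt/{app}/{app}-{version}.tar.gz -C /opt/{app}'",
        ]
        if run_migrations:
            # A migration step couples today's deploy to schema decisions
            # made months earlier during development.
            steps.append(f"ssh {host} '/opt/{app}/bin/migrate --to {version}'")
        steps.append(f"ssh {host} 'systemctl start {app}'")
        plan.extend(steps)
    return plan

plan = build_deploy_plan("shop", "1.4.2", ["web1", "web2"], run_migrations=True)
print(len(plan))  # every step a developer adds is multiplied by the fleet size
```

Writing even a toy plan like this makes it obvious why a 'small' extra step during development turns into real deployment time across a fleet.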
Although teams have debated for decades over who gets to decide the data model, nowadays the decision has shifted towards the developer. With the advent of ORM-rich frameworks, the database design starts with the developer more often than not. Some would argue this opens the product and team up to unnecessary risk, both by imposing on the developer the importance of a well-designed data model and by imposing on the developer the need to know how to design data models correctly. Also, when there are changes to the model, the developer needs to be meticulous about maintaining current production data safely, as well as making changes that do not cripple already proven capabilities.

But it is these specific concerns that developers need to be aware of when writing their applications. The closer you are to the fire, the more knowledge you have and experience you gain to not get burned. If you make decisions in your code that bring your application to a halt, debugging those problems and learning from them provides experience far beyond what you would gain even in a tight team of application developers and database administrators. Of course, database administrators also perform additional jobs such as maintaining the database server, software, accounts, etc., but new services such as AWS, and the ease with which they provide developers with stable and easily maintained environments, reduce that impact on developers.
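As a small sketch of the 'don't cripple production data' concern, here is one common pattern for an additive schema change: add the new column as nullable so existing code keeps working, then backfill existing rows. The table and column names are invented for illustration, and SQLite stands in for whatever database production actually runs.

```python
import sqlite3

# Illustrative only: a toy "production" table with existing rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT NOT NULL)")
conn.executemany(
    "INSERT INTO users (email) VALUES (?)",
    [("a@example.com",), ("b@example.com",)],
)

# Step 1: add the new column as nullable, so code deployed before this
# migration can still insert and read rows without errors.
conn.execute("ALTER TABLE users ADD COLUMN status TEXT")

# Step 2: backfill existing rows explicitly rather than assuming a default.
conn.execute("UPDATE users SET status = 'active' WHERE status IS NULL")

rows = conn.execute("SELECT email, status FROM users ORDER BY id").fetchall()
print(rows)  # existing data survives the change intact
```

The developer who has to write (and debug) this two-step dance once will think differently the next time they reach for a model change.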
Paying developers to be someone else
Jeff makes a valid argument that if you choose to rely on developers to be everyone else on the team at the same time, you turn a "Master of [a development] trade" into just a "Jack of all trades". He also argues that you end up paying a developer to be themselves only part of the time, while they do the jobs of another person or people. Although I can see the point, you are also paying them to enrich their capabilities as a developer through the experience of different situations. Although being a bunch of different people can thin someone out to the point of burnout, that can be mitigated by a management process which understands the diverse nature of the responsibilities being asked of its people, and which relieves the pressure through realistic timelines, expectations, and overt recognition of efforts and results.
It's not for everyone
I will concede that learning things about code through other responsibilities is not for everyone. Seeing the bright side of an annoyance doesn't eliminate the annoyance, but rather helps frame it in terms of how it benefits you while you are not able to do what you want. If you are asked to do one of these 'DevOps' tasks, research of and experience with tools made to make those tasks easier or more effective can trickle down to development. Whether it be experience with your cap scripts reshaping the way you design your applications through different abstractions or encapsulation, or figuring out a database design problem that helps you realize the implications of full table scans, learning and experiencing these things yourself helps burn them into your memory as you move forward in your development career. It's incumbent upon team leaders to ensure the right people are asked to do the right things. Not every developer can do it or needs to do it. Choose wisely and listen to your team. Be lean, but first, be smart.
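The full-table-scan lesson, for instance, is one you can see for yourself in a few lines. This sketch (table and index names invented) runs the same query before and after adding an index and lets SQLite's query planner report the difference:

```python
import sqlite3

# Illustrative only: watch the access path change when an index appears.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")

def plan_for(query):
    # The last column of EXPLAIN QUERY PLAN output describes the access path.
    return conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[-1]

query = "SELECT id FROM users WHERE email = 'x@example.com'"
before = plan_for(query)  # e.g. "SCAN users": every row must be read
conn.execute("CREATE INDEX idx_users_email ON users (email)")
after = plan_for(query)   # e.g. "SEARCH users USING ... INDEX ...": direct lookup
print(before)
print(after)
```

Reading 'SCAN' next to your own query, on your own schema, sticks in a way that a DBA's bug report never quite does.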
Questions, Comments? @leechris / [email protected]