Developer Maturity

Today, I was onsite at a customer location. This particular customer has an interest in programming, and he showed me a site he was checking out that asked a variety of questions to determine which language he should learn: do you want to write games, web apps, desktop apps, and so on? For nearly every sequence of answers, I could predict which language it would pick. Why? Because I have used countless languages over the years and know where each one excels.

Another customer recently asked me to create a very trivial service that would consume JSON messages sent from a server. You could write a Java app for this, but why? My suggestion was either a simple PHP script or a Python app running a REST service with Flask. Either option is far simpler than writing Java code to handle the processing.

One of the key indicators of a developer's maturity is the ability to pick the proper technology for the problem at hand. For the customer, the technologies selected can make a huge difference not only in the price of the software, but in its success as well. This extends beyond programming languages: mature developers understand build systems, revision control systems, project management methods, design patterns, and much more. For customers, picking an inexperienced developer can mean substantially higher development cost, increased long-term cost of ownership, longer development times, and numerous other problems. An experienced developer can help you navigate the technology map and select the options that provide the best user experience at the best cost.
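To give a sense of how little code the Flask approach takes, here is a minimal sketch of a service that consumes JSON messages. The `/events` route and the payload fields are invented for illustration; they are not from the customer's actual spec.

```python
# Minimal Flask service that accepts JSON POSTs -- the lightweight
# alternative to a full Java application suggested above. The route
# name and payload fields are hypothetical examples.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/events", methods=["POST"])
def consume_event():
    payload = request.get_json(force=True)  # parse the JSON body
    # A real service would validate and process the message here.
    return jsonify({"received": True, "keys": sorted(payload.keys())})

if __name__ == "__main__":
    app.run(port=5000)
```

That is the entire service: one route, one function, and Flask handles the HTTP plumbing.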

(Fr)agile Development

Agile development seems to be the 'in' thing for software development firms today. Everyone has their daily standup meetings, scrum masters, sprints, and so forth. But does it really bring any true value to software development? I don't think so. In fact, I have taken to calling it fragile development. Why would I say that about such a beloved idea? Because my experience is that it creates horrible software.

Sprint after sprint, developers work to output code, often without knowing the big picture. Day after day, short-sighted development goals are pushed so that enough points get done and everything is completed for the sprint. Managers now have even more ability to micromanage, since everything has to be finished by the end of the sprint and everyone has to report in with their daily progress. Time is wasted in meeting after meeting that brings little, if any, actual value to the team.

What does all this mean? It means developers are always under the gun to output code quickly so it's done for this sprint. This causes developers to take the short path instead of analyzing things more thoroughly, since such analysis would take more than the allotted points and might push things off schedule. Developers are never able to try new things, because failure is not an option: points must get done and software must be pushed. Software engineers never have an opportunity to practice engineering skills or architect larger systems, because everything is broken down into bite-sized pieces without any real concern for how it all fits together. To make it worse, important technological changes and refactoring get pushed to the bottom of the list, since they have no immediately visible benefit for the organization. In the end, developers never grow, code stagnates, and poor software becomes the norm. Sprint after sprint turns into a death march for features, bugs never get fixed (unless they bring immediate business value), and the expertise of senior developers is ignored in favor of short-sighted business objectives.

If you want quality software, let engineers do what they do: give them the latitude to make the decisions necessary to create the product you want. Qualified engineers are more than capable of creating awesome software. Why hinder them?

The micro:bit

Yesterday, I had the pleasure of playing around with a piece of technology called the micro:bit. The micro:bit is a small hardware platform intended primarily for introducing young people to programming and tinkering. It's a small device, not quite 2 inches square, but it is amazing. In fact, I'd say it's probably one of the coolest pieces of technology I've played with in a while. What makes it so awesome?

First, it runs Python (MicroPython, specifically). While I'm not primarily a Python programmer, I do recognize that it has become one of the most widely used languages, not only for tinkerers but also for professional development. This makes the micro:bit far more accessible to young people than the C++ used by the Arduino controller. (Note: I love Arduino, but the micro:bit is certainly more 'kid-friendly'.)

The second thing I love about the micro:bit is that I had to install exactly nothing on my computer to get it working: no IDE, no flashing utility, nothing at all. The IDE is online, so you can type in your favorite browser and download the .hex file when you're done. Flashing the device could not possibly be easier. When you plug the device in via a standard USB phone cable, it's recognized as a USB thumb drive. Simply copy the .hex file to this virtual drive, and within moments your device is flashed and you can see the results of your work.

If the above weren't already enough to give the micro:bit 5 stars, there's more. The micro:bit includes an impressive hardware selection and associated API. There are 2 buttons, a 5×5 LED grid, an accelerometer, a magnetometer, and Bluetooth. The form factor includes pads big enough to use alligator clips for interfacing with external hardware, so you don't even have to solder anything.

While I've only tinkered with this device for a short time, it is definitely one of the more impressive hackable hardware pieces I've seen in quite some time. This needs to be in the hands of every elementary and middle school student interested in programming or hardware. The micro:bit is, in my opinion, a real game-changer.
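To show just how approachable the API is, here is a small MicroPython sketch using the micro:bit's standard `microbit` module. Note that this runs on the device itself (flash the .hex as described above), not on a desktop Python interpreter.

```python
# Hardware "hello world" for the micro:bit, in MicroPython.
# Uses the device's built-in microbit module, which only exists
# on the board -- this will not run on a desktop interpreter.
from microbit import display, button_a, button_b, accelerometer, Image, sleep

display.scroll("Hello!")

while True:
    if button_a.was_pressed():
        display.show(Image.HAPPY)                    # draw on the 5x5 LED grid
    elif button_b.was_pressed():
        display.scroll(str(accelerometer.get_x()))   # read the accelerometer
    sleep(100)
```

A dozen lines gets you buttons, the LED grid, and the accelerometer, which is exactly why the device is such a good fit for young programmers.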

“Time Saving” Frameworks

Over the course of my career, I have seen all kinds of advancements in programming, some of them good, others bad. Today, there seems to be a trend to use all kinds of frameworks, design patterns, and technologies in an effort to save time. Unfortunately, what seems to save time in the short term often causes trouble in the long term.

I must start out by saying that I'm not suggesting all frameworks and technologies are a bad thing. For example, using an MVC approach for developing web applications is an excellent idea, as responsibilities are cleanly separated between the HTML, the business logic, and the data access code. This works very well and improves the maintainability of code.

However, other frameworks are less valuable to me. For example, the popular Lombok package allows Java developers to create data classes without manually creating the getters and setters for fields. This sounds like a great time saver, since you avoid writing code! But here's the problem: it's often useful to be able to find where variables are set, and you simply can't do that if the setter doesn't exist until compile time. Furthermore, the package saves little time, since Eclipse can already generate (real) getters and setters. Why is this package needed? I don't believe it brings any value; it's just another technology for the sake of technology.

In Android code, I see all kinds of frameworks being used. Unfortunately, most of them only obfuscate the code. While there is value in the MVP model, many Android activities are too small to really gain much from it, and the excessive number of interfaces becomes a drag on development.

Speaking of interfaces, they are probably the most overused design pattern. Again, I have to start by saying that I absolutely love interfaces. Properly used, they are one of the most valuable features of Java. Unfortunately, when everything becomes an interface and you need dependency injection to create the objects, things start to get complicated, often without any real benefit. Interfaces allow us to plan for the future, to admit up front that other implementations may be needed later. But in many cases this is simply never going to happen. Database code should certainly be written using interfaces wherever possible, because it is not only reasonable but indeed likely that the project will, at some point during its lifetime, use a different database. Other things may make sense for interfaces too, such as code for credit card processing or sending messages. Ask yourself: is there a reasonable expectation that we will have multiple ways to do this? If the answer is no, why are you using an interface?

Again, I have no problem with these technologies. But my first choice will always be the simplest stock Java code I can write, moving on to more complex frameworks only when necessary. My code footprint is smaller, the code is easier to maintain, and new developers can more easily start developing or supporting the codebase when there are fewer complex patterns and technologies to learn.
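The "interface only where it pays off" test above is a Java argument, but the idea sketches the same in any language. Here it is in Python using `abc`; the names (`UserStore` and friends) are invented for illustration, not from any real project.

```python
# Sketch of using an interface only where a second implementation is
# genuinely likely. Database access earns an abstract interface; a
# one-off helper would not. All names here are illustrative.
from abc import ABC, abstractmethod

class UserStore(ABC):
    """Interface: we reasonably expect more than one backing store."""
    @abstractmethod
    def save(self, user_id: int, name: str) -> None: ...
    @abstractmethod
    def load(self, user_id: int) -> str: ...

class InMemoryUserStore(UserStore):
    """First (and perhaps only-for-now) implementation."""
    def __init__(self):
        self._rows = {}
    def save(self, user_id, name):
        self._rows[user_id] = name
    def load(self, user_id):
        return self._rows[user_id]

# A PostgresUserStore could later be swapped in without touching callers.
```

Callers depend only on `UserStore`, so the day a different database arrives, nothing outside the new implementation changes. If that day is never coming, skip the interface and write the concrete class.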

What Does Expert Mean?

I started tinkering with computer programming twenty years ago. It started pretty simply with learning HTML; then I moved on to learn C, then C++, and Java. Along the way, I picked up numerous other languages. For the last 17 years, writing software has been my profession. I consider myself an expert programmer, particularly in Java. But what does expert mean?

I see resumes of people who have 3 or 4 years of experience and claim to be experts in Java. Is that possible? Unfortunately, I think the IT world has cheapened the idea of being an expert to the point where it is meaningless. In the competitive market for software developers, individuals market themselves as experts so they can acquire the most lucrative job opportunities.

Does this really hurt anyone? Yes, it really does. Companies have self-proclaimed experts write their software. Then, when that worker's contract is done, or another more exciting project comes up, that expert jumps ship. Now, the company is left to maintain the code. And this is usually the point where the business finds out that the 'expert' they hired wasn't an expert at all.

Expertise in computer programming means the ability to analyze a problem and come up with a solution that is maintainable and stable. But all too often, we find 'experts' writing code that nobody wants to maintain because it is convoluted, poorly documented, buggy, and out of compliance with style and naming conventions. We find database schemas that are bloated and make no sense. We see poor project hierarchy and little object decomposition. In short, we see garbage code. When you hire a programmer, don't let them fool you: if they've only been writing code for a few years, they're not an expert.

The Importance of Engineering

During my career, I have encountered projects that were well written and those that were not. Oftentimes, the poorly written projects were farmed out to development firms in under-developed parts of the world, such as India or Pakistan. What is it that makes code poorly written? For me, the most important aspect of well-written code is maintainability. No other metric correlates as directly with long-term revenue and cost as maintainability. Code written today should be able to be quickly modified in the future when bugs are found, when requirements change, or when new features are desired. How can this be accomplished?

First, projects need to be logically laid out. Code packaging schemes are necessary to easily navigate large projects. Throwing everything in a single folder ensures that future developers have a more difficult time finding the code they are looking for.

Second, files and classes must be named logically; otherwise, every file becomes 'mystery meat' that is only identifiable once it's opened.

Third, classes should be broken up appropriately. I have worked with projects where there was only a single data access object for the entire schema. Monolithic files like this are substantially more difficult to navigate, and they can also cause problems with source control if multiple developers are trying to edit different parts of the file.

Fourth, a consistent style should be followed and whitespace should be cleaned up. Following code with mixed styles, or with 10 lines of whitespace in the middle, is more difficult. Imagine reading a paragraph in a book where the author decided to add 10 extra blank lines. Code is no different; it is intended to be read.

Fifth, properly breaking out functions is imperative. Too often I have seen code where, for example, the logic to send an email was entirely replicated over and over again instead of placed in a single function. When email standards change (for example, changes to security) or when the tools you use to send email change (such as switching from SMTP to a REST-based service like Mailgun), you can be sure that the work will take longer in a poorly engineered codebase and that you are more likely to introduce bugs.

These are some of the biggest issues I typically encounter in project after project, and they cause significant problems for businesses. Unfortunately, the customer is often unaware of how poorly the code is written. When they hire someone else to maintain it, they wonder why it takes so long to update or why there are so many bugs. They falsely believe the new developer is less competent and attribute the problems to him or her. Long term, time-to-market suffers, customer confidence is diminished by bugs, and business opportunities are missed because of wasted time.

When you hire developers, particularly when they are outsourced, it is imperative to ensure that they are writing good code. Ask to see code samples before hiring, perform frequent code reviews with them, and specify coding standards they must follow. If you don't, the money you save by using offshore resources will be lost during the course of maintaining the application.
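The fifth point above can be made concrete with a short Python sketch: all email logic lives in two functions, so switching transports later touches one place. The function names and server details here are invented for illustration.

```python
# Sketch of the refactoring argued for above: email assembly and
# delivery each live in exactly one function, instead of being
# copy-pasted throughout the codebase. Host/port values are dummies.
import smtplib
from email.message import EmailMessage

def build_email(sender: str, recipient: str, subject: str, body: str) -> EmailMessage:
    """The single place that knows how a message is assembled."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

def send_email(msg: EmailMessage, host: str = "localhost", port: int = 25) -> None:
    """The single place that knows how a message is delivered.
    Swapping SMTP for a REST service like Mailgun means editing
    only this function body, not every call site."""
    with smtplib.SMTP(host, port) as smtp:
        smtp.send_message(msg)
```

When the duplicated version of this code is scattered across a project, every transport change becomes a hunt through the codebase; here it is a one-function edit.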

Build Automation with iOS

I am a big fan of build automation. As a developer, the last thing I want to waste my time doing is compiling and distributing my software. Countless technologies are available that can play a part in automating your build process. Tools like Make, Ant, Maven, and Gradle can define the steps required to build the software. JUnit can be used for unit testing, and plugins like FindBugs and Checkstyle can check code quality. Then, HockeyApp can distribute apps to test users (or even end users). All of these processes can be executed automatically by a tool like Jenkins. Jenkins is an amazingly powerful tool that pulls all of the above together in a nice web interface, with countless plugins available to accomplish just about any development task. So long as your project can be built using command-line tools, Jenkins can build it for you.

Unfortunately, there is one exception that really sticks out: iOS. Sure, you can build an iOS application using Jenkins, but you will find that you spend nearly as much time managing a failed build process as you would spend manually building and deploying the app yourself. I have dozens and dozens of tasks that I have automated with Jenkins: Android builds, JBoss applications, C apps, scripts, REST calls, and iOS builds. The only processes that ever require my attention are the iOS builds.

Why is iOS build automation so painful? Well, once you try it, you realize very quickly that Apple does not intend for you to use anything but the Xcode environment for development. Switching profiles, certificates, and the like requires all kinds of Ruby scripts and a detailed knowledge of the iOS build process. There is no simple 'make' command you can run to build the application. Certificates cause all kinds of problems too. New developer on the team? Great: plan on the build breaking when he checks in the project with a new certificate you don't have on the build server. New hardware to test against? Don't plan on using any of the existing builds; they won't work until you add the device to your profile and install new certificates. Time to pay your annual fee as an Apple developer? Be ready to install new certificates on Jenkins. Sure, you can try to automate some of the certificate mayhem, but plan to enter all of your Apple developer passwords, keychain passwords, and any other password or key you can imagine into Jenkins to try to accomplish the build. Then, when it comes time to deploy to quality assurance staff or test users in your organization, plan for a whole new round of fun. Apple really doesn't want you using anything but their TestFlight tools.

I could go on, but you get the point: iOS + build automation = pain. I love Apple's line of laptops, which are a dream for developers. I just wish their iOS platform weren't such a nightmare to work with.

The Value of Education

Anybody who knows me knows well that I value education. I have studied countless languages, trained formally in both locksmithing and herbal medicine, achieved a third-degree black belt in taekwondo, and earned an associate's degree in psychology. But what surprises most people is that I don't have a degree in computer science. In fact, even the degree I do have was earned through a correspondence school less than 10 years ago. I did not go to college out of high school; I joined the Army. And just about everything I know about programming I taught myself.

Why does this matter? Well, in today's society there still seems to be a strong desire for candidates applying for programming positions to have a bachelor's degree in computer science. Many job listings require a bachelor's at a minimum. The unfortunate thing is that most of the best programmers I have ever encountered did not have a degree in computer science, and many had no degree at all. Throughout my career, I have always been identified as among the best when it came time for reviews, so a degree is not necessary for someone to ascend to the top of the class.

So what is needed? Programming is an art that is learned through doing, not through formal education. And that is where the problem begins. I have interviewed countless degree-holding candidates for programming positions and, sadly, few of them really knew the first thing about programming. They had attended years of college, but couldn't identify the objects in a problem or design a trivial database to house the corresponding data. Why? Because they had never actually programmed much of anything. Maybe they implemented a stack, a linked list, or a sorting algorithm. And while an understanding of those things is important, they already exist in the libraries of every language out there. Have they ever written anything more than that? Typically, I hear graduates tell me about one or two projects they worked on. They have a degree, but they've only ever written one or two real programs. What's the value of that? Their piece of paper has come with no actual knowledge or expertise.

We seriously need to revamp our education system to focus on real-world training and spend less time on the things which bring no value to the business world. If we do not, we will continue to watch computer-related jobs go to foreign firms that are better trained and cheaper than our own fellow Americans.

Writing on the Wall

In the '90s, an amazing thing happened: Linux was born. This small project has had a profound impact on the world of technology. Not only did it create a Unix clone, it advanced the open source movement by leaps and bounds. During the decade after the creation of Linux, companies like Microsoft argued against the idea of open source and attacked Linux.

But then, in the early 2000s, things started to change for the Linux movement from an unlikely source: Apple. The new Mac OS X used BSD (an operating system very similar to Linux) as the core of the operating system. The Linux world was very excited about this! The change meant that Unix/Linux hackers had Unix support on a commercial operating system. Now, it seems that developers across the globe are using Macs for development. Why? Because services like Google Cloud and AWS, as well as Docker, use Linux. Mac users can develop cloud applications on their machines to be deployed to Linux servers and have similar environments on both.

Where does Windows come into the picture? Microsoft has fought against these technologies, and has even come up with its own competing cloud service: Azure. But as more and more developers jump ship to Linux or Mac, Windows needs to move. And over the last several years it has. Microsoft's .NET Core is not only open source, but it also runs cross-platform, an idea that would have seemed almost blasphemous to the Microsoft of a decade ago. And, as of last fall, Windows 10 includes the option to install Bash support. Indeed, it would appear that Microsoft has seen the writing on the wall and is working to change the direction of the company to be friendlier to the Unix world that has secretly been in control of computing since the dawn of technology.

But is it too little, too late? Can Microsoft lure developers back with a Bash shell? Time will tell. But as for this Unix user, I'm glad to see the change.