Software as an Iterative Process

The world can broadly be divided into the physical world and the digital world.

In the physical world, products are manufactured and deployed. Those products rarely see the manufacturer again. If a problem is found, a new process may be implemented to solve it, but customers who already own the product are unlikely to see the benefit of that new process.

In the digital world, products are created and deployed just like in the physical world. However, everything that follows is different. When software problems are identified, patches are created or new versions are deployed. When customers apply the patches or upgrade their software, they see the benefit of those changes.

This difference allows for a vastly different approach to creating and deploying products in the digital world. Unlike physical products, software can benefit from an iterative process. Software can be modified today, tested tomorrow, and deployed the following day. What if it doesn’t work? The changes can be rolled back or a new patch can be deployed. Unlike a physical product, a digital product is never complete.

This huge paradigm difference means that software companies can be far more nimble than manufacturing companies. Since changes can be made at any time, each software release carries far less risk than a physical product launch.

As a developer of custom software, I often work with customers who are uncomfortable with this process. These customers want to wait until all functionality is present or until everything is fully polished and tested. This makes sense in the physical world, but it can actually be detrimental in the software world.

Why is it important to use an iterative approach to software development? Since software can be deployed quickly and problems can be rapidly fixed, frequent releases get updates into customers' hands far sooner than waiting for a major release would. Customers can then provide feedback if features are not useful or work improperly. This feedback creates a loop that lets developers go back and 'get it right' in a controlled way. Without this loop, it's possible to spend months developing features that don't actually meet user requirements, or to introduce bugs that cost substantial time and money to find and fix. That means wasted development time as well as missed opportunity costs. Additionally, small, frequent updates mean that each change can be tested independently of other changes, and issues stay much smaller than they would in large updates. Frequent releases also mean a shorter time-to-market for new features.

Iterative approaches to software improve the user experience, shorten time-to-market, reduce hard-to-find bugs, and simplify test cycles. Are you releasing software updates frequently, or are you treating software like a physical commodity?

Read the Fine Print

Yesterday, I was contacted by a customer who wanted to add push notifications to their mobile application. This is a common request, but most customers don't have the infrastructure to support push notifications. Push notifications require a server to generate the notifications as well as some type of software that lets the user create and send them. Without any server infrastructure, smaller businesses are left without the ability to implement push notifications. Of course, a variety of services exist to overcome this problem. One such service is OneSignal, which allows for very rapid implementation of push notifications using its servers and infrastructure. A developer can have everything set up and ready to demo in less than 15 minutes.

Before suggesting this solution to the customer, I wanted to see pricing options and the terms of service. But when I looked for pricing, I found the service was completely free – there were no paid options. That sounded great, but I knew there had to be a catch – after all, businesses have to make money somewhere! As I read the Terms of Service, I was shocked to see:

Licensee acknowledges and agrees that the SDK enables Licensee to collect certain information from end users (“End Users”) of the SDK’s functionality (collectively, “SDK Information”), which generally helps provide developers with functionality to target and personalize the notifications they send to end users. This data collected includes: End Users’ mobile advertising identifiers, such as Apple IDFAs and Android Advertising identifiers; End Users’ email addresses; End Users’ IP address, device push token, precise location (e.g., GPS-level) data, network information, language, time zone, product preferences, and privacy preferences.

So, this free service is collecting just about every piece of information possible about the user. While this may be acceptable for some apps, for many it would constitute a substantial invasion of privacy. For my application, I would have to craft my own Terms of Service and ensure users were aware that the above information was being collected. In the tech community, do we actually read the terms of service, particularly when they affect our end users, or do we just ignore them? In this instance, I am very glad I dug deeper. While I found OneSignal to be an awesome product, its terms of service are simply incompatible with the application I'm working to deploy.

Loosely Coupled Systems

One of the principles of modern engineering is to create loosely coupled systems. But what does that mean and why is it so important?

In the past, it was common to create huge systems that included code for a wide variety of functions. For example, an eCommerce system might include code for accessing the database, processing credit card payments, and interacting with an inventory management system. While having all the code in one place may sound great, it has serious drawbacks. Maintenance becomes increasingly difficult as an application grows. Testing must cover the entire system, and deployment is an all-or-nothing affair. Upgrades are also harder, since the entire system must be upgraded at once and the opportunities for regression errors multiply.

Today, systems strive to be modular and loosely coupled. One service may handle credit card processing, another may provide database access, and still another may provide email support. While there are more pieces, those pieces can be assembled into a wide variety of configurations across different applications, and each module can be tested more easily. Once the credit card service is deployed, for example, it does not need to be changed or tested again until new features are required or bugs are found. Each piece can use the best technology for its task, and upgrades can happen on a per-service basis.

Currently, these services are typically deployed as JSON-based REST services, which are becoming ubiquitous. Publicly available services exist for things like weather data, stock quotes, and ISO country codes. This modular approach not only decreases development effort, it improves application stability as well.
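
As an illustration, here is a minimal sketch of consuming a JSON-based REST service from Python using the well-known requests library. The endpoint URL and parameters are hypothetical stand-ins for whatever public service you actually use.

import requests

# Call a hypothetical weather service and parse its JSON response.
response = requests.get(
    "https://api.example.com/weather",  # hypothetical endpoint
    params={"city": "St. Louis", "units": "imperial"},
    timeout=10,
)
response.raise_for_status()  # fail loudly on HTTP errors
data = response.json()
print(data)

Because the client speaks plain HTTP and JSON, it neither knows nor cares what language or database the service uses internally – exactly the loose coupling described above.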

On any project you're involved in, whether you're writing the code or managing the project, modularity and loose coupling should be among your most important guiding principles.

Getting Started with Artificial Intelligence

It seems that artificial intelligence is in the news more and more. Most large companies use AI somewhere in their business, and more and more organizations are finding ways to improve with it. Purchase recommendation systems, self-driving cars, video games, language translation apps, and route mapping software are just a few examples of artificial intelligence we see every day.

But where does someone interested in AI get started? AI can be very complicated, but that doesn't mean you can't get involved! One of my favorite resources for learning AI is the series Artificial Intelligence for Humans by Jeff Heaton. His books cover a variety of topics including genetic algorithms, machine learning, clustering, linear regression, swarm algorithms, and much more. While these topics can be complicated, Jeff presents them without all the math, in a way that is far more readable than most texts on the subject.

If you are interested in AI, one of the first questions to answer is which programming language you want to use. While any language can be used for AI, the bulk of the available tools and frameworks exist for Java, C++, Python, and R. If you're big into number crunching and Big Data, R may be the obvious choice. If you're not a programmer, Python may be easier. For existing developers, Java or C++ may be best.

What type of AI should you start with? Try genetic algorithms or swarming algorithms. A genetic algorithm treats candidate answers like genomes and, through a series of mutations and genetic splicing, evolves toward a solution. Swarming algorithms look at groups of objects and attempt to make them behave like a cohesive team; they are great for games with AI-controlled enemy armies and are commonly used by game developers. Other common, and simple, AI algorithms include K-Means clustering (used to group objects by similarity), linear regression (used to predict unknown values with relatively simple algebra), and pathfinding algorithms such as Dijkstra's Algorithm.
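
To make the genetic approach concrete, here is a minimal sketch in Python that evolves a random string toward a target phrase. The target, population size, and mutation rate are arbitrary values chosen purely for illustration.

import random
import string

TARGET = "HELLO WORLD"
ALPHABET = string.ascii_uppercase + " "
POP_SIZE = 100
MUTATION_RATE = 0.05

def fitness(genome):
    # Score a genome by how many characters match the target.
    return sum(1 for a, b in zip(genome, TARGET) if a == b)

def mutate(genome):
    # Randomly replace characters, simulating genetic mutation.
    return "".join(
        random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
        for c in genome
    )

def crossover(a, b):
    # Splice two parent genomes together at a random point.
    point = random.randint(0, len(TARGET))
    return a[:point] + b[point:]

# Start with a population of completely random genomes.
population = [
    "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
    for _ in range(POP_SIZE)
]

generation = 0
while True:
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    # Keep the fittest half and breed mutated children from them.
    parents = population[: POP_SIZE // 2]
    children = [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(POP_SIZE // 2)
    ]
    population = parents + children
    generation += 1

print("Reached '%s' in %d generations" % (TARGET, generation))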

Once you understand the basics of AI, you can move on to frameworks like DeepLearning4J or TensorFlow to create neural networks (a far more advanced type of AI), or look into libraries like OpenCV to tinker with computer vision.
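
As a small taste of what those frameworks look like, here is a minimal sketch using TensorFlow's Keras API to learn the XOR function. The layer sizes and epoch count are arbitrary choices for demonstration, not recommendations.

import numpy as np
import tensorflow as tf

# XOR truth table: inputs and expected outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

# A tiny network: one hidden layer of 8 neurons, one output neuron.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(2,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=500, verbose=0)

print(model.predict(X).round())  # approximately [[0], [1], [1], [0]]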

Whether it’s simulating a fish tank with swarming algorithms, solving the Traveling Salesman Problem with genetic algorithms, or finding a path through a maze with Dijkstra’s Algorithm, artificial intelligence is loads of fun.

Solidity

Recently, I was contacted by a cryptocurrency company that wanted smart contracts written. I had never done this before, but I have no problem learning a new language – particularly for a blockchain technology. So what is a smart contract? Smart contracts are an amazing technology that allows you to run code on the blockchain. The code is written in a language called Solidity, which is similar to Java or JavaScript. Once written, the code is deployed to the blockchain, where others can call it later. It’s similar to a cloud application, except the cloud is the blockchain, and it’s always available because there are always machines mining cryptocurrency. Many sources proclaim that this kind of technology will change the world, and it’s easy to see why: the ability to write contracts in code that are triggered later will be hugely impactful to banking, insurance, and countless other fields.

So what does Solidity code look like, anyway? Here’s a simple Hello World app in Solidity:

pragma solidity ^0.4.24;

contract HelloWorld {
    // Event that writes a message to the transaction log.
    event log_string(bytes32 log);

    // Fallback function: runs whenever the contract is called
    // with no matching function, emitting our greeting.
    function() public {
        emit log_string("Hello World!");
    }
}
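
Once deployed – for example, through the Remix IDE – any transaction sent to the contract’s address triggers the fallback function and writes “Hello World!” to the transaction log.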

If you’re interested in learning more about Solidity, check out the MetaMask plugin for Chrome as well as the Remix IDE and find yourself a good online video for Solidity development. I feel confident that the future holds countless opportunities for developers who master this technology!

Minimum Viable Product

An important, but often ignored, concept in the realm of software development is the notion of Minimum Viable Product. Defining a Minimum Viable Product (or MVP) provides several benefits for software development teams and companies. First, let’s define it: the MVP is simply the absolute minimum set of requirements for a working product. For instance, if you’re writing a calculator application for a mobile device, the MVP would simply need to add, subtract, multiply, and divide numbers. It would not need more complicated functions such as square roots, exponents, trig functions, or averaging. Those features can be added in future versions if necessary.
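
As a sketch of just how small an MVP can be, here is the entire core of that hypothetical calculator in Python – four operations and nothing else:

def calculate(a, op, b):
    # The MVP supports exactly four operations; everything else
    # (square roots, trig, averaging) waits for a later release.
    operations = {
        "+": lambda x, y: x + y,
        "-": lambda x, y: x - y,
        "*": lambda x, y: x * y,
        "/": lambda x, y: x / y,
    }
    if op not in operations:
        raise ValueError("Not in the MVP: " + op)
    return operations[op](a, b)

print(calculate(6, "*", 7))  # 42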

What makes this idea so important? When we define the Minimum Viable Product, we accomplish several things. First, by eliminating the fluff, we can focus on core functionality and release a product quickly. This reduces time-to-market and accelerates revenue from the application. Second, creating a functional application quickly allows decision makers within the organization to rapidly determine whether the software meets the requirements it was intended to solve, and being involved earlier in the process ensures that development resources don’t head down the wrong path. Third, deploying the MVP early means customers and end users can interact with it and provide feedback early in the process, when it’s far easier (and cheaper) to change direction than after a months- or years-long development effort.

Unfortunately, many organizations insist that every feature is essential to the application, which guarantees long development cycles and dramatically longer time-to-market. Meanwhile, more agile teams capture market share ahead of them.

When planning a new product, software application, tool, or mobile app, the first objective is to define the MVP so that you can quickly create a prototype, verify that the essential functionality is present, and get the software into the hands of users. Only then do you circle back for round two, adding new features starting with the most important ones. This process will not only improve your market share, it will help ensure that the products and services you create actually bring value, rather than years being spent on software projects that bring no measurable value.

Work in Progress

When most people think about DevOps, Continuous Integration (CI), or Continuous Deployment (CD), they think about tools like Jenkins or Bamboo. And, indeed, these tools are indispensable for rapidly moving software through the development pipeline. However, an even more rudimentary principle is required to truly enjoy the benefits of any CI/CD environment – limiting work in progress (WIP).

Work in progress is the number of items currently being developed. The higher the number, the more difficult and time-consuming deployments become. Conversely, the lower the number of features in development, the lower the risk of each deployment and the more rapidly it can be pushed to live systems. Limiting WIP to a small number – a single new feature or a couple of important bug fixes – ensures that you can get the changes tested and deployed in short order. This fits well with the ideals of agile development – quick sprints in which a small number of items are completed – and keeps focus on the most important items. Once a sprint is complete, a new version can be pushed to end users. Users get what they want, turnaround time is low, and bugs are kept to a minimum because changes are small and easily tested.

This principle applies well beyond software; in fact, it was originally documented on factory floors. Creating products in small batches ensures that if there is a problem, it’s limited to a small batch, and that the first shippable products are available more quickly than with large-batch processing.

How does your company handle work in progress? A small number of items, or huge batches? Does it take forever to release a new version or get a product ready for market? Consider how limiting work in progress and releasing frequently could impact your time-to-market and ultimately increase revenue!

Requirements Analysis

One of the most important skills for a developer is the ability to analyze requirements and determine the most appropriate solution. For many developers, this means choosing the best suite of tools within their preferred development environment. For example, a Java developer may weigh whether JDBC or JPA is the better option for connecting to the database. Notice that it was already assumed the application would be written in Java – options like C# or PHP were ignored because the developer making the choice was a Java developer. The problem is that there may be better options depending on the requirements of the project.

For example, I am currently working on a mobile project that uses a JavaScript framework. One of the requirements of the app is to create a fairly detailed PDF document, which was built using pdfmake – an excellent toolkit for generating PDF documents from a simple configuration file. As the project grew, a web service was requested that would take the configuration, generate the PDF, and send an email. What server language would I use? I could use Java – but would I want to rewrite the entire PDF generator in a new language? Absolutely not. What else could work? In the end, I opted for a Node.js solution. Why? Because it was utterly trivial to reuse the existing JavaScript PDF code within a Node application. In fact, I managed to write all the necessary functionality in a single file with less than 100 lines of code. Had I selected Java, the work would easily have grown into several dozen files, hundreds of lines of code, and substantially more billable hours.

Unfortunately, technology decisions are made every single day by organizations that insist on a language or framework before the requirements are even known. Better options may exist, but a lack of knowledge of competing technologies prevents their selection. In the end, projects take longer to develop, cost more, and become increasingly difficult to maintain. Certainly no developer can be an expert in every technology, but any senior developer should be able to offer a variety of competing solutions to a problem and explain the pros and cons of each. When the best technology is selected, projects come in ahead of schedule and under budget, time-to-market decreases, maintenance costs are minimized, and – in the end – the organization benefits.

What is a Maker?


Last week, I went to lunch with a business colleague. As we were talking, I mentioned that I was in the midst of writing a book targeting makers. “What’s a maker?” he asked. After pondering the question for a few seconds, I realized it was actually a really good question. What is a maker? I could say that makers are people who tinker with hardware platforms such as Arduino or Raspberry Pi, but that’s a rather narrow definition. What about the man who builds an aquaponics system – is he a maker too? Or the girl who knits hats – is she a maker? Does her knit hat need to include electronics to count?

Placing a definition on ‘maker’ is actually harder than it looks. If you subscribe to Make Magazine or ever visit a Maker Faire, you will learn pretty quickly that the term is rather broad. A few years ago, I saw makers blowing glass, forging swords, and knitting blankets – hardly ‘tech savvy’ projects, but still making. Of course, I also saw countless tech projects such as Raspberry Pi clusters and IoT devices, as well as jewelry makers using circuit boards, artists drawing robots, and things made of Legos.

So what is a maker? The best answer I can come up with is that a maker is someone who uses the tools and materials around him or her to make something useful, or even just novel. It’s being part of a movement that empowers people to solve problems on their own. Makers are the people you want to be with when the world ends, because they’ll have the tools and knowledge to rebuild society. Makers are the kind of minds that drove the Renaissance – minds like Leonardo da Vinci’s. Makers are the jack-of-all-trades men and women who can program a microcontroller, 3D print a case, and use both in a robot they cut, welded, and painted themselves.

As a business, why should you care? What difference does it make to you? Makers are the people in your organization who will solve the problems that move your business to the next level. They are the men and women who poke at something to see how it works and how they can improve it. They are the problem solvers you want on every team in your organization, because they are the thinkers who will create the great things of tomorrow!

What is Refactoring?

To me, refactoring is one of the most important parts of the software development lifecycle. Most developers are familiar with the idea, but customers and managers may not be. So, what is refactoring, and what value does it bring to the development process?

Refactoring is the process of going through code and redesigning, updating, and fixing it with the intent of improving it. This can encompass many different improvements. For example, a developer may find that certain blocks of code are repeated over and over again. Repeated code causes all kinds of problems. At the very least it needlessly increases the size of the application, but on the more problematic side, it also decreases maintainability. If a code block is repeated eight times and the logic in that block must change, the code will need to be updated in eight different places. Chances are good that developers will only find seven of them, and you’ll spend months trying to figure out where the problem is. Other common refactoring tasks include redesigning code to make the application cleaner, removing unnecessary code, and generally improving the codebase.

Why is this important, and what is the benefit to the organization? Without refactoring, code tends to become messy. Each new developer adds something new, code is duplicated, paradigms change, data models are updated, technologies improve, and so on. As these things happen, the application becomes increasingly difficult for developers to follow. Bugs increase, the codebase grows, and things run less than optimally. In the end, refactoring is like getting a tune-up for your car. As a developer, I am always looking for ways to improve the code so that future developers will have an easier time maintaining the application. It’s incredibly important, and it should be a priority not only for developers but for management and customers as well.
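
As a small illustration, here is a hypothetical before-and-after in Python showing one of the most common refactorings: extracting duplicated logic into a single function. The pricing rules are invented purely for this example.

# Before: the same tax-and-discount logic is repeated for every order type.
def price_retail_order(subtotal):
    total = subtotal * 0.95        # 5% discount
    return total + total * 0.08    # 8% sales tax

def price_wholesale_order(subtotal):
    total = subtotal * 0.80        # 20% discount
    return total + total * 0.08    # 8% sales tax

# After: the shared logic lives in one place, so a tax change is one edit.
def apply_pricing(subtotal, discount_rate, tax_rate=0.08):
    total = subtotal * (1 - discount_rate)
    return total + total * tax_rate

def price_retail_order_v2(subtotal):
    return apply_pricing(subtotal, discount_rate=0.05)

def price_wholesale_order_v2(subtotal):
    return apply_pricing(subtotal, discount_rate=0.20)

If the sales tax rate changes, the refactored version requires one edit instead of a hunt through every order type – exactly the maintainability benefit described above.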