How I Got Here

During the last decade, I have been asked countless times how I would recommend someone get into the programming world without going to college. Many people don’t want to spend the time or money on a computer science degree, and since I managed without one, they generally assume I can tell them the shortcut to doing the same. Unfortunately, few people realize the amount of time and effort it took for me to get where I am, or the difficulties I’ve encountered by not having a degree in a field where a degree is expected.

I didn’t start working with computers until around 1997. I was in the US Army at the time and worked exclusively with Unix machines. Unlike the Windows world, the Unix world has always had tools for developers. The machines I worked on had C, C++, Perl, Tcl/Tk, Fortran, Bourne shell, and numerous other programming tools. During the course of my work, I often used scripts written by others, and soon learned to modify them to do what I wanted. I purchased programming books from Amazon and decided to learn more. At home, I set up an old computer to run Linux so I could have a development environment similar to the machines at work. As time went on, I took courses through the National Cryptologic School on Perl, Unix administration, C programming, and other technical subjects. After becoming fairly comfortable with C, I furthered my education through a correspondence school (the now defunct National Radio Institute), where I earned a diploma in Visual Basic programming (since I was already comfortable in C, I figured it best to branch out and learn a new language). A few years later, I took a few courses in C++ programming from the University of Maryland.

Throughout this time, I spent countless hours at home learning everything I could about programming. I wrote programs to do all kinds of things, from GUI applications to command line scripts. This was a difficult time in my life: working a full-time job while spending all my free time learning to write software took a heavy toll on my marriage. But it would all be worth it when I got out of the Army and found a job in the software realm.

Then, in 2001, it was finally time to step out and find a programming job. Of course, I had no real experience as a programmer. I had written some code and scripts in the Army, but hardly anything that would be considered production code. Nonetheless, I managed to find a local company that needed an entry-level programmer, and, thanks to a friend who knew the owner, I was offered a position. In my hubris, I assumed I already knew everything at that point in my career. In reality, it would take years to fill the gaps in my knowledge and become a good programmer. With project after project under my belt, I would finally become a respected developer about a decade later.

What has my lack of a degree cost me? Throughout my career, countless companies wouldn’t even talk to me because I didn’t have a degree; they were unwavering in their requirement of a BS in computer science even though I had been working in the field for years. At several companies, it became obvious that I would never be promoted simply because I lacked a piece of paper. Even though I could code better than my peers, my lack of a degree held me back.

At no point has my path been easy. It’s involved an incredible amount of work and sacrifice. And if you’re thinking of teaching yourself to program and finding a job, I wish you the best of luck; you are about to find out that it takes far more than watching a few online videos and making a webpage.

If you do choose this path, how can you make it successful? Passion – you must have an unwavering passion to write code. You need to spend every waking hour writing code, reading books, watching videos, and doing everything you can to become a good programmer. Expect to put in substantial time and effort. Expect to struggle finding your first job. Expect difficulty advancing in your career. Don’t think for a minute that it’ll be easy – I can promise you, it won’t.

Picking a Server Platform

Many small businesses want websites or mobile applications that require server-side functionality, often including a database to store user information. What options exist for a small business? What are the pros and cons of each? I will examine three options: Node with SQLite, PHP with MySQL, and Tomcat. These represent just a small sample of what is available, from small-scale applications to enterprise solutions.

For small applications, I like Node and SQLite. Node is a simple platform for running server-side JavaScript, and since it requires virtually no infrastructure, a Node service can be installed and deployed in minutes. Likewise, SQLite requires no separate server installation: a SQLite database is a single file that can be backed up or restored simply by copying it. While this combination is great for small applications, enterprise applications benefit from more robust environments. Node and SQLite work really well for small internal applications or for implementing a handful of services backed by a small database.
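
To make that concrete, here is a minimal sketch of what such a service might look like. It assumes the express and sqlite3 npm packages are installed; the table, route, and port are purely illustrative.

// Minimal sketch of a Node service backed by SQLite
// (assumes the "express" and "sqlite3" npm packages are installed).
const express = require('express');
const sqlite3 = require('sqlite3');

const app = express();
const db = new sqlite3.Database('app.db'); // the entire database lives in this one file

// Create a table on first run.
db.run('CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)');

// A single JSON endpoint that reads from the database.
app.get('/users', (req, res) => {
  db.all('SELECT id, name FROM users', (err, rows) => {
    if (err) return res.status(500).json({ error: err.message });
    res.json(rows);
  });
});

app.listen(3000, () => console.log('Listening on port 3000'));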

Next up, PHP and MySQL. This combination is widely deployed on a variety of platforms, and in fact that’s one of the reasons I like it. Hosting providers like GoDaddy typically support PHP and MySQL out of the box, so applications and services can be deployed without much effort. PHP/MySQL is also more robust than Node/SQLite. On the negative side, PHP has gone through versions that differ substantially, and PHP code can easily become unmanageable if developers aren’t careful. I like this solution for smaller customers who need a small number of services on an existing PHP server.

Finally, there’s Tomcat, an excellent Java-based server that brings all the advantages of the Java programming language to a robust server environment. Tomcat can integrate with virtually any database, though MySQL is a common choice. It’s an excellent option for Java web applications or services, but it suffers from one big problem: it’s the most complicated of the three to set up. This option is best when a large number of users or a large database must be supported, and it’s the one I recommend for enterprise customers.

Numerous other databases and server platforms exist. Microsoft’s .NET platform can work well for customers who prefer Microsoft products, and Ruby may be a desirable option for some customers too. As with all technology choices, server platforms must be selected based on customer requirements: small customers appreciate rapid, low-cost development, while larger customers want more robust solutions and are less price sensitive.

Software as an Iterative Process

The world can broadly be divided into the physical world and the digital world.

In the physical world, products are manufactured, shipped, and never see the manufacturer again. If a problem is found, a new process may be implemented to fix it, but customers who already own the product will likely never see the benefit of that change.

In the digital world, products are created and deployed just like in the physical world. However, everything that follows is different. When software problems are identified, patches are created or new versions are deployed. When the customer accesses the patches or upgrades their software, they see the benefit of the new changes.

This difference allows for a vastly different approach to creating and deploying products in the digital world. Unlike physical products, software can benefit from an iterative process. Software can be modified today, tested tomorrow, and deployed the following day. What if it doesn’t work? The changes can be rolled back or a new patch can be deployed. Unlike a physical product, a digital product is never complete.

This huge paradigm difference means that software companies can be far more nimble than manufacturing companies. Since changes can be made at any time, software companies enjoy far less risk than manufacturing companies.

As a developer of custom software, I often work with customers that are uncomfortable with this process. These customers want to wait until all functionality is present or until everything is fully polished and tested. This makes sense in the physical world, but can actually be detrimental in the software world.

Why is it important to use an iterative approach to software development? Since software can be deployed quickly and changes can be fixed rapidly if necessary, frequent releases let customers receive updates far sooner than if they waited for a major release. Customers can then provide feedback when features aren’t useful or don’t work properly, and that feedback loop lets developers go back and ‘get it right’ in a controlled way. Without this loop, it’s possible to spend months developing features that don’t actually meet user requirements, or to introduce bugs that cost a great deal to find and fix; that means wasted development time as well as missed opportunities. Additionally, small, frequent updates mean that each change can be tested independently of the others, so problems stay much smaller than in large updates. Frequent updates also mean a shorter time-to-market for new features.

Iterative approaches to software improve the user experience, improve time-to-market, reduce difficult-to-find bugs, and shorten and simplify test cycles. Are you releasing software updates frequently, or are you treating software like a physical commodity?

Read the Fine Print

Yesterday, I was contacted by a customer who wanted to add push notifications to their mobile application. This is a common request, but customers generally don’t have the infrastructure to support push notifications: they require a server to generate the notifications as well as some way for the user to create and send them. Without any server infrastructure, smaller businesses are left unable to implement push notifications on their own. Of course, a variety of services exist to overcome this problem. One such service is OneSignal, which allows very rapid implementation of push notifications using its servers and infrastructure; a developer can have everything set up and ready to demo in less than 15 minutes. Before suggesting this solution to the customer, I wanted to see the pricing options and terms of service. But when I looked for pricing, I found the service was completely free, with no paid options at all. That sounded great, but I knew there had to be a catch; after all, businesses have to make money somewhere! As I read the Terms of Service, I was shocked to see:

Licensee acknowledges and agrees that the SDK enables Licensee to collect certain information from end users (“End Users”) of the SDK’s functionality (collectively, “SDK Information”), which generally helps provide developers with functionality to target and personalize the notifications they send to end users. This data collected includes: End Users’ mobile advertising identifiers, such as Apple IDFAs and Android Advertising identifiers; End Users’ email addresses; End Users’ IP address, device push token, precise location (e.g., GPS-level) data, network information, language, time zone, product preferences, and privacy preferences.

So, this free service collects just about every piece of information possible about the user. That may be fine for some apps, but for many it would constitute a substantial privacy invasion. For my application, I would now have to craft my own Terms of Service and make sure users were aware that the above information was being collected. In the tech community, do we actually read the terms of service, particularly when they affect our end users, or do we just ignore them? In this instance, I am very glad I dug deeper. While I found OneSignal to be an awesome product, its terms of service are simply incompatible with the application I’m working to deploy.

Loosely Coupled Systems

One of the principles of modern engineering is to create loosely coupled systems. But what does that mean and why is it so important?

In the past, it was common to create huge systems that included code for a wide variety of functions. For example, an eCommerce system might include code for accessing the database, processing credit card payments, and interacting with an inventory management system. While having all the code in one place may sound great, it has serious drawbacks. Maintenance becomes increasingly difficult as an application grows, testing must cover the entire system, and deployment is an all-or-nothing affair. Upgrades are also harder, since the entire system must be upgraded at once and the opportunities for regression errors multiply.

Today, systems strive to be modular and loosely coupled. One piece may do credit card processing, another service may provide database access, and still another service may provide email support. While there are more pieces, these pieces can be assembled into a wide variety of configurations across different applications and these modules can be more easily tested. Once the credit card service is deployed, for example, it does not need to be changed or tested again until new features are required or bugs are found. Each piece can use the best technology for the task, and upgrades can happen on a per-service basis.

Currently, these modules are often deployed as JSON-based REST services, which have become ubiquitous; publicly available services exist for everything from weather data and stock quotes to ISO country codes. This modular approach not only decreases development effort, it improves application stability as well.
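
As a sketch of what that loose coupling looks like in practice, consider ordering code that charges a customer through a separate payment service. The URL and JSON fields below are hypothetical; the point is that the caller depends only on the service’s contract, not its implementation. (This assumes Node 18+ for the built-in fetch.)

// Loose coupling in practice: the ordering code knows nothing about how
// payments are implemented, only the JSON contract of a (hypothetical)
// payment service.
async function chargeCustomer(orderId, amountCents) {
  // The payment logic lives behind this URL and can be rewritten, rehosted,
  // or replaced without touching this code, as long as the contract holds.
  const response = await fetch('https://payments.example.com/charges', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ orderId, amountCents }),
  });

  if (!response.ok) {
    throw new Error(`Payment service returned ${response.status}`);
  }
  return response.json(); // e.g. { chargeId: "...", status: "approved" }
}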

On any project you’re involved in, whether you’re writing the code or managing the project, modularity and loose coupling should be among your most important guiding principles.

Getting Started with Artificial Intelligence

It seems that artificial intelligence is in the news more and more. Most large companies use AI for something within their business, and more and more businesses are finding ways to improve their organizations with it. Purchase recommendation systems, self-driving cars, video games, language translation apps, and route mapping software are just a few examples of artificial intelligence we see every day.

But where does someone interested in AI get started?  AI can be very complicated, but that doesn’t mean you can’t get involved too! One of my favorite resources for learning AI is the series Artificial Intelligence for Humans by Jeff Heaton. In his books, he covers a variety of topics including genetic algorithms, machine learning, clustering, linear regression, swarm algorithms, and so much more. While these topics can be complicated, Jeff presents them without all the math in a way that is far more readable than most texts on the topic.

If you are interested in AI, one of the first questions to answer is what programming language you want to use. While any language can be used for AI, the bulk of the tools and frameworks exist for Java, C++, Python, and R. If you’re big into number crunching and Big Data, R may be the obvious choice. If you’re not a programmer, Python may be easier. For existing developers, Java or C++ may be best.

What type of AI should you start with? Try genetic algorithms or swarming algorithms. Genetic algorithms encode candidate solutions as genomes and, through a series of mutations and genetic splicing, evolve toward an answer. Swarming algorithms look at groups of objects and attempt to have them behave like a cohesive team; they’re great for games with AI-controlled enemy armies and are commonly used by game developers. Other common, and simple, AI techniques include K-Means clustering (used to group objects by similarity), linear regression (used to predict unknown values using relatively simple algebra), and pathfinding algorithms such as Dijkstra’s Algorithm.
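
To show how approachable some of these techniques are, here is a quick sketch of one-variable linear regression using the standard least-squares formulas; the data points are made up purely for illustration.

// Simple (one-variable) linear regression: fit y = slope * x + intercept
// using ordinary least squares.
function linearRegression(xs, ys) {
  const n = xs.length;
  const meanX = xs.reduce((a, b) => a + b, 0) / n;
  const meanY = ys.reduce((a, b) => a + b, 0) / n;

  let numerator = 0;
  let denominator = 0;
  for (let i = 0; i < n; i++) {
    numerator += (xs[i] - meanX) * (ys[i] - meanY);
    denominator += (xs[i] - meanX) ** 2;
  }

  const slope = numerator / denominator;
  const intercept = meanY - slope * meanX;
  return { slope, intercept, predict: x => slope * x + intercept };
}

// Example: predict an unknown value from a small, made-up data set.
const model = linearRegression([1, 2, 3, 4], [2.1, 4.2, 5.9, 8.1]);
console.log(model.predict(5)); // roughly 10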

Once you understand the basics of AI, you can move on to frameworks like DeepLearning4J or TensorFlow to help create neural networks (a far more advanced type of AI), or look into libraries like OpenCV for tinkering with computer vision.

Whether it’s creating an emulation of a fish tank using swarming algorithms, solving the Traveling Salesman Problem using genetic algorithms, or calculating a path through a maze using Dijkstra’s Algorithm, artificial intelligence is loads of fun.

Solidity

Recently, I was contacted by a cryptocurrency company that wanted me to write smart contracts. I had never done this before, but I have no problem learning a new language, particularly for a blockchain technology. So what is a smart contract? Smart contracts are an amazing technology that allows you to run code on the blockchain. The code is written in a language called Solidity, which is similar to Java or JavaScript. Once written, it’s deployed to the blockchain, where it can be called by others later. It’s similar to a cloud application, except the cloud is the blockchain, and it’s always available because there are always machines mining the cryptocurrency. Many sources are proclaiming that this kind of technology will change the world, and it’s easy to see why: the ability to write contracts in code that will later be triggered will be hugely impactful to banking, insurance, and countless other fields.

So what does Solidity code look like, anyway? Here’s a simple Hello World app in Solidity:

pragma solidity ^0.4.24;

contract HelloWorld {
    // Events are written to the transaction log and are the simplest way
    // for a contract to produce observable output.
    event log_string(bytes32 log);

    // The unnamed fallback function runs whenever the contract is called
    // without matching any other function.
    function() public {
        emit log_string("Hello World!");
    }
}

If you’re interested in learning more about Solidity, check out the MetaMask plugin for Chrome as well as the Remix IDE and find yourself a good online video for Solidity development. I feel confident that the future holds countless opportunities for developers who master this technology!

Minimum Viable Product

An important, but often ignored, concept in the realm of software development is the notion of Minimum Viable Product. Defining a Minimum Viable Product (or MVP) provides several benefits for software development teams and companies. First, let’s define it: the MVP is simply the absolute minimum set of features for a working product. For instance, if you’re writing a calculator application for a mobile device, the MVP would simply add, subtract, multiply, and divide numbers. It would not need to perform more complicated functions such as square roots, exponents, trig functions, or averaging; those features could be added in future versions if necessary.
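
As a purely illustrative sketch, the MVP version of that calculator might expose nothing more than the four basic operations:

// Illustrative only: the MVP calculator exposes just the four core
// operations. Square roots, exponents, trig, and averaging are deferred
// to later releases.
const calculator = {
  add: (a, b) => a + b,
  subtract: (a, b) => a - b,
  multiply: (a, b) => a * b,
  divide: (a, b) => {
    if (b === 0) throw new Error('Division by zero');
    return a / b;
  },
};

console.log(calculator.add(2, 3));     // 5
console.log(calculator.divide(10, 4)); // 2.5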

What makes this idea so important? When we define the Minimum Viable Product, we accomplish several things. First, by eliminating all the fluff, we can focus on core functionality and release a product quickly; this reduces time-to-market and starts generating revenue from the application sooner. Second, creating a functional application quickly lets decision makers within the organization rapidly determine whether the software actually addresses the problem it was intended to solve, which keeps development resources from heading down the wrong path. Third, deploying the MVP early means customers and end users can interact with it and provide feedback while it is still easy (and cheap) to change direction, instead of waiting out a months- or years-long development process.

Unfortunately, many organizations insist that every feature is essential to the application, which guarantees long development cycles and a dramatically longer time-to-market. Meanwhile, more agile teams are capturing market share first.

When planning a new product, software application, tool, or mobile app, the first objective is to define the MVP so you can quickly create a prototype, verify functionality, ensure the essentials are present, and get your software into the hands of users. Only then do you circle back for round two and begin adding new features, starting with the most important ones. This process will not only improve your market share, it will also help ensure that the products and services you create actually bring value, rather than spending years on software projects that deliver no measurable value.

Work in Progress

When most people think about DevOps, Continuous Integration (CI), or Continuous Deployment (CD), they think about tools like Jenkins or Bamboo. And, indeed, these tools are indispensable for rapidly moving software through the development pipeline. However, an even more fundamental principle is required to truly enjoy the benefits of any CI/CD environment: limiting work in progress (WIP).

Work in progress is the number of items currently being developed. The higher that number, the more difficult and time-consuming deployments become; conversely, the lower the number of features in development, the lower the risk of each deployment and the more rapidly it can be pushed to live systems. Limiting WIP to a small number of items, say a single new feature or a couple of important bug fixes, ensures that changes can be tested and deployed in short order. This fits well with the ideals of agile development, quick sprints in which a small number of items are completed, and keeps the focus on the most important work. Once a sprint is complete, a new version can be pushed to end users. Users get what they want, turnaround time stays low, and bugs are kept to a minimum because changes are small and easily tested.

This principle applies well beyond software; in fact, it was originally documented on factory floors. Building products in small batches ensures that if there is a problem, it’s limited to a small batch, and that the first shippable products are available more quickly than with large-batch processing.

How does your company handle work in progress? A small number of items, or huge batches? Does it take forever to release a new version or to get a product ready for market? Consider how limiting work in progress and frequently releasing your software or product could improve your time-to-market and ultimately increase revenue!

Requirements Analysis

One of the most important skills for a developer is the ability to analyze requirements and determine the most appropriate solution. For many developers, this means determining the best suite of tools within their preferred development environment. For example, a Java developer may decide whether JDBC or JPA is the better option for connecting to the database. Notice that it was already assumed the application would be written in Java; options like C# or PHP were ignored because the developer making the choice was a Java developer. The problem is that there may be better options depending on the requirements of the project.

For example, I am currently working on a mobile project that uses a JavaScript framework. One of the requirements of the app is to create a fairly detailed PDF document, which was built using pdfmake, an excellent toolkit for generating PDF documents based on a simple configuration. As the project grew, the customer asked for a web service that would be given the configuration, generate the PDF, and send an email. What server language would I use? I could use Java, but did I want to rewrite the entire PDF generator in a new language? Absolutely not. In the end, I opted for a Node.js solution, because it was utterly trivial to reuse the existing JavaScript PDF code within a Node application. In fact, I managed to write all the necessary functionality in a single file with less than 100 lines of code. Had I selected Java, the work would easily have grown into several dozen files, hundreds of lines of code, and substantially more billable hours.
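
Here is a rough sketch of the kind of service described above: accept a pdfmake document definition, build the PDF, and email it. It assumes the express, pdfmake, and nodemailer npm packages; the font files, SMTP settings, and endpoint names are illustrative only, and the pdfmake import path can vary by version.

// Sketch of a small Node service: pdfmake config in, PDF out by email.
const express = require('express');
const PdfPrinter = require('pdfmake'); // older versions: require('pdfmake/src/printer')
const nodemailer = require('nodemailer');

// pdfmake needs font files on the server side; these paths are placeholders.
const printer = new PdfPrinter({
  Roboto: {
    normal: 'fonts/Roboto-Regular.ttf',
    bold: 'fonts/Roboto-Medium.ttf',
  },
});

const mailer = nodemailer.createTransport({ host: 'smtp.example.com', port: 587 });

// Turn a pdfmake document definition into a Buffer.
function buildPdf(docDefinition) {
  return new Promise((resolve, reject) => {
    const doc = printer.createPdfKitDocument(docDefinition);
    const chunks = [];
    doc.on('data', chunk => chunks.push(chunk));
    doc.on('end', () => resolve(Buffer.concat(chunks)));
    doc.on('error', reject);
    doc.end();
  });
}

const app = express();
app.use(express.json());

// The mobile app posts the same document definition it already uses with pdfmake.
app.post('/reports', async (req, res) => {
  try {
    const pdf = await buildPdf(req.body.docDefinition);
    await mailer.sendMail({
      from: 'reports@example.com',
      to: req.body.email,
      subject: 'Your report',
      attachments: [{ filename: 'report.pdf', content: pdf }],
    });
    res.json({ sent: true });
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

app.listen(3000);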

Unfortunately, technology decisions are made every day by organizations that insist on a language or framework before the requirements are even known. Better options may exist, but a lack of knowledge of competing technologies prevents their selection. In the end, projects take longer to develop, cost more, and become increasingly difficult to maintain. Certainly no developer can be an expert in every technology, but any senior developer should be able to offer a variety of competing solutions to a problem and explain the pros and cons of each. When the best technology is selected, projects are far more likely to come in ahead of schedule and under budget; time-to-market is decreased, maintenance costs are minimized, and, in the end, the organization benefits.