AI Resources

Today, there are countless AI products and resources available to developers. I’d like to review a few that I’ve used.

AIFH

The first resource on my list is Artificial Intelligence for Humans (AIFH) by Jeff Heaton. Of all the books on programming artificial intelligence, his are by far the best. Written without any heavy math, Jeff’s books explain just about everything you could want to know about artificial intelligence. Additionally, Jeff has his own framework, Encog, which can be used from Java or run through a standalone GUI for development. Of all the resources I’ve used to date, this is without hesitation the best.
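For a sense of what Encog code looks like, here is a minimal sketch of the classic XOR example in Java, written from my recollection of the Encog 3.x API, so treat the exact class names as approximate rather than authoritative:

    import org.encog.Encog;
    import org.encog.engine.network.activation.ActivationSigmoid;
    import org.encog.ml.data.MLDataSet;
    import org.encog.ml.data.basic.BasicMLDataSet;
    import org.encog.neural.networks.BasicNetwork;
    import org.encog.neural.networks.layers.BasicLayer;
    import org.encog.neural.networks.training.propagation.resilient.ResilientPropagation;

    public class XorExample {
        public static void main(String[] args) {
            double[][] input = { {0, 0}, {0, 1}, {1, 0}, {1, 1} };
            double[][] ideal = { {0}, {1}, {1}, {0} };

            // Build a small 2-3-1 feed-forward network.
            BasicNetwork network = new BasicNetwork();
            network.addLayer(new BasicLayer(null, true, 2));
            network.addLayer(new BasicLayer(new ActivationSigmoid(), true, 3));
            network.addLayer(new BasicLayer(new ActivationSigmoid(), false, 1));
            network.getStructure().finalizeStructure();
            network.reset();

            // Train with resilient propagation until the error is acceptably low.
            MLDataSet trainingSet = new BasicMLDataSet(input, ideal);
            ResilientPropagation train = new ResilientPropagation(network, trainingSet);
            do {
                train.iteration();
            } while (train.getError() > 0.01);
            train.finishTraining();

            Encog.getInstance().shutdown();
        }
    }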

AWS DeepLens

I was recently sent an AWS DeepLens by a client for an artificial intelligence project. It’s always exciting to receive hardware from a client, so I was certainly looking forward to this! Unfortunately, my experience with it has been a bit less exciting. For starters, I’m stuck trying to figure out what exactly the purpose of this box is. Am I supposed to use it to learn artificial intelligence? Is it intended to be embedded within a product? I have absolutely no idea. While the DeepLens is a neat toy, the setup is far from simple. An internet connection to AWS is needed, certificates need to be installed on your machine, and everything is configured through your Amazon account. Even worse, the service does not appear to be free. (You do get one free year on AWS, but after that you pay for the service. I have no idea what running the DeepLens actually costs, but I’ve noticed warnings about charges associated with some actions.) I applaud Amazon for trying to bring deep learning to the masses, but I think this product is a dud.

DL4J

Deeplearning4j (DL4J) is a Java library for deep learning. Written by Skymind, DL4J is one of the most well-known AI libraries for Java. With earlier versions of DL4J, the user had to install various native libraries such as ND4J (N-Dimensional Arrays for Java). This proved to be more difficult than it sounds, as various libraries depended on other libraries, documentation was scarce, and error messages were cryptic at best. Fortunately, with the 1.x versions of DL4J, the install process is streamlined to simply cloning a repository and running a Maven build. Native libraries are managed within the Maven build, sparing users the trouble of the earlier versions. With these changes, DL4J is an excellent framework I would recommend for any AI project.
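To give a feel for what DL4J looks like once the Maven build is in place, here is a minimal sketch of a small feed-forward network using the 1.x builder API. This is written from memory rather than copied from the project’s documentation, so the exact class and method names should be double-checked against the version you pull in:

    import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
    import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
    import org.deeplearning4j.nn.conf.layers.DenseLayer;
    import org.deeplearning4j.nn.conf.layers.OutputLayer;
    import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
    import org.nd4j.linalg.activations.Activation;
    import org.nd4j.linalg.learning.config.Adam;
    import org.nd4j.linalg.lossfunctions.LossFunctions;

    public class Dl4jSketch {
        public static void main(String[] args) {
            // A tiny feed-forward classifier: 4 inputs, one hidden layer, 3 output classes.
            MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                    .updater(new Adam(0.001))
                    .list()
                    .layer(new DenseLayer.Builder().nIn(4).nOut(10)
                            .activation(Activation.RELU).build())
                    .layer(new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
                            .nIn(10).nOut(3).activation(Activation.SOFTMAX).build())
                    .build();

            MultiLayerNetwork model = new MultiLayerNetwork(conf);
            model.init();
            // model.fit(...) would follow once a DataSetIterator is wired up.
        }
    }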

Killer Robots

During the last year, countless tech leaders have talked about the danger that artificial intelligence could pose in the future. Like most people, I laughed at them. After all, do I really think that The Terminator or The Matrix were prophetic? Hardly. But the more I read and the more I pondered it myself, the more concerned I became. Now, I wonder if there’s any way to prevent it from happening at all.

Is it really reasonable to think AI could take over the world? Do we really think code will be so poorly written and that software testers will be so incompetent as to let AI robots kill humanity? Unfortunately, I do. Not intentionally, of course, but bad code that wasn’t properly tested will make it into the wild on robots. Consider all the system updates that have been performed on your computer or your cell phone. Think about all the app updates that happen every single day. Consider all the one star reviews for apps on the mobile stores. AI will be no different.

Consider all the potential causes of AI issues: developer errors, inadequate testing, corporate release requirements, poorly defined ethics, unforeseen events, and so on. Each of these could cause AI to behave in ways it was never intended to, with potentially catastrophic consequences. Consider government AI being developed by the lowest bidder – wow, that’s scary.

The more I think about it, the more certain I become that AI will eventually cause huge problems for the world. As such, it’s imperative that we – the tech community – consider the limits of AI, not with regard to technology, but with regard to safety and security. Do we want AI police officers or soldiers? That sounds dangerous. Could Russian hackers embed “Order 66” into our own robot army? Do we trust robots with firearms to make the appropriate decision in a life-or-death situation?

My intent is not to sound like an alarmist, but rather to begin thinking about the issues now. If not, we may find it’s too late to do so later.

Social Networking

On Tuesday, my daughter and I embarked on an epic rail journey from Anaheim, California to Altoona, Pennsylvania. As I write this, I’m sitting in the lounge at the Chicago Amtrak station. Tomorrow, after another two trains, I’ll finally arrive home.

Anybody who knows me knows I love to travel by rail. While air travel is fast, it’s rarely fun. Rail is the opposite – much slower, but generally an enjoyable experience. One of the best parts of the train is the dining car. Not only is the food good, but due to the limited seating, you end up sitting with strangers. This is always a great opportunity for real social networking. Tuesday night, my daughter and I sat with a couple retired from John Deere. He worked in the factory, and she spent years working in HR and training other divisions. She was also a frequent traveler to Europe. The next day, I shared a meal with an outspoken Trump supporter and handwriting analysis expert, a rail advocate and teacher, and a man who inherited land in the deserts of California. This morning, I shared breakfast with a pastor and his wife from Minnesota who work with troubled inner-city kids.

Unlike Facebook, Twitter, and other so-called ‘social networking’ sites, riding the rails gives me an opportunity to share a meal with people from across the country – people from different states, with different political views, different religious views, and every other difference imaginable. And, unlike social networking sites, the discussions are typically cordial and enjoyable. I get to interact with people as people – not as digital personas. I get to see them as humans – not as highly curated avatars. These experiences are what real social networking should be about – not an anti-social experience behind a computer screen.

Today, instead of interacting with people on a computer, why not invite a friend for coffee, have dinner with your neighbor, or invite a coworker over for a burger? You’ll find the social interaction far more rewarding than Facebook.

Picking a Server Platform

Many small businesses want websites or mobile applications that require server-side functionality. Often, this functionality includes a database to store user information. What options exist, and what are the pros and cons of each? I will examine three different stacks – Node and SQLite, PHP and MySQL, and Tomcat. These represent just a small sample of the options available to a business, from small-scale applications to enterprise solutions.

For small applications, I like Node and SQLite. Node is a simple platform for running server-side code. Since it requires virtually no infrastructure, Node services can be installed and deployed in minutes. Likewise, SQLite requires no installation. A SQLite database is a single file that can be backed up or restored simply by copying it. While this stack is great for small applications, enterprise applications would benefit from a more robust environment. Node and SQLite work really well for small internal applications or for implementing a handful of services backed by a small database.

Next up, PHP and MySQL. This combo is widely deployed on a variety of platforms. In fact, that’s one of the reasons I like it. Typically, hosting providers like GoDaddy support PHP and MySQL out of the box, so applications and services can be deployed without much effort. PHP/MySQL is also more robust than Node/SQLite. On the negative side, PHP has a variety of substantially different versions, and PHP code can easily become unmanageable if developers aren’t careful. I like this solution for smaller customers needing a small number of services on an existing PHP server.

Finally, there’s Tomcat. This option is an excellent Java-based server, bringing all the advantages of the Java programming language into a robust server environment. Tomcat can integrate with any database, but MySQL is a common choice. Tomcat is an excellent option for Java web applications or services, but it suffers from one big problem – it’s the most complicated option to set up. This option is best when a large number of users or a large database must be supported, and it’s the option I like to recommend for enterprise customers.
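For the curious, the code side of a Tomcat deployment is just the standard servlet API. Here is a minimal sketch of a JSON endpoint, assuming the Servlet 3.x annotations that recent Tomcat versions support; the class name and URL are made up for illustration:

    import java.io.IOException;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Annotation-based mapping: no web.xml entry needed on a Servlet 3.x container.
    @WebServlet("/status")
    public class StatusServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            resp.setContentType("application/json");
            resp.getWriter().write("{\"status\":\"ok\"}");
        }
    }

Package a class like this into a WAR, drop it into Tomcat’s webapps directory, and the endpoint is live; simple enough, but still more moving parts than the Node or PHP options above.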

Numerous other databases and server platforms exist. Microsoft’s .NET platform can work great for customers who prefer Microsoft products, and Ruby may be a desirable option for some customers too. As with all technology choices, server platforms must be selected based on customer requirements: small customers appreciate rapid, low-cost development, while larger customers want more robust solutions and are less price sensitive.

Read the Fine Print

Yesterday, I was contacted by a customer who wanted to add push notifications to their mobile application. This is a common request, but customers generally don’t have the infrastructure to support push notifications. Push notifications require a server to generate them as well as some type of software that lets the user create and send them. Without any server infrastructure, smaller businesses are left without the ability to implement push notifications. Of course, a variety of services exist to overcome this problem. One such service is OneSignal. OneSignal allows for very rapid implementation of push notifications using their servers and infrastructure. A developer can have everything set up and ready to demo in less than 15 minutes. Before suggesting this solution to the customer, I wanted to see the pricing options and terms of service. But when I looked for pricing, I found the service was completely free – there were no pay options. That sounded great, but I knew there had to be a catch – after all, businesses have to make money somewhere! As I read the Terms of Service, I was shocked to see:

Licensee acknowledges and agrees that the SDK enables Licensee to collect certain information from end users (“End Users”) of the SDK’s functionality (collectively, “SDK Information”), which generally helps provide developers with functionality to target and personalize the notifications they send to end users. This data collected includes: End Users’ mobile advertising identifiers, such as Apple IDFAs and Android Advertising identifiers; End Users’ email addresses; End Users’ IP address, device push token, precise location (e.g., GPS-level) data, network information, language, time zone, product preferences, and privacy preferences.

So, this free service is collecting just about every piece of information possible about the user. While this may be OK for some apps, for many apps this would constitute a pretty substantial privacy invasion. Now, for my application, I would have to craft my own Terms of Service and ensure the user was aware that I was collecting the above information. In the tech community, do we read the terms of service, particularly when they will impact our end users, or do we just ignore them? In this instance, I am very glad I dug further. While I found OneSignal to be an awesome product, its terms of service are simply incompatible with the application I’m working to deploy.

Language Overload

As a technology enthusiast, I live in a great time. Computing devices are everywhere, artificial intelligence is advancing by leaps and bounds, and hardware platforms such as Arduino and Raspberry Pi have made it easier for people to tinker with new technologies without spending a lot of money. But with all the great advances in technology, there is one advance I do not enjoy – the endless list of new programming languages released every single year. I’m not opposed to new languages; they are a necessary part of the march of progress. But don’t we have enough already? One example is the proliferation of languages that run on the Java Virtual Machine. Originally there was just Java; now we have Scala, Groovy, and Kotlin too. And each one has its own group of advocates insisting their language is the best.

When Apple announced a few years back that they were replacing Objective-C, I was originally optimistic. Objective-C is pretty much unused outside the Apple world. If you wanted to write iOS applications, you had to learn an otherwise useless language. Was Apple going to lower the bar for new developers and allow them to use a language that was already widely used? Nope – they invented another language: Swift. I’ve heard lots of great things about Swift, but I’m not really interested in learning another niche language to develop for a single platform. (Instead, I’ve chosen to use hybrid tools such as Cordova and Ionic.)

What’s the harm in all these languages? While different languages do bring different things to the table, there has to come a point where the market is oversaturated. With all the languages out there, developers have to pick which ones to learn and which to skip. While every developer should be competent in more than one language, it’s certainly not realistic to expect a developer to be an expert in a dozen of them. And since every language needs its own libraries of code, scores of developers are wasting their time rewriting standard functions for yet another language. Solving a problem that has already been solved a dozen times in the past – just in a new language – is not particularly useful.

Where do we go from here? I suspect that we will continue to see a proliferation of languages. Long term, we will see languages transition to legacy code far sooner than they used to. This will make it increasingly difficult for companies to maintain their codebases and will mean more frequent rewrites of software applications just to stay current. For companies, I would suggest ensuring that the languages you choose are solid, stable, mature, and likely to be around long term. You can go for the newest language out there, but you’ll struggle to find developers now, and you’ll likely have an ever more difficult time maintaining the application long term.

Loosely Coupled Systems

One of the principles of modern engineering is to create loosely coupled systems. But what does that mean and why is it so important?

In the past, it was common to create huge systems that included code for a wide variety of different functions. For example, an eCommerce system might include code for accessing the database, processing credit card payments, and interacting with an inventory management system. While having all the code in one place may sound great, it has some serious drawbacks. Maintenance becomes increasingly difficult the larger an application grows. Additionally, testing must cover the entire system, and deployment is an all-or-nothing affair. Upgrades are also more difficult, as the entire system must be upgraded at once and opportunities for regression errors multiply.

Today, systems strive to be modular and loosely coupled. One service may handle credit card processing, another may provide database access, and still another may provide email support. While there are more pieces, those pieces can be assembled into a wide variety of configurations across different applications, and each module can be tested more easily. Once the credit card service is deployed, for example, it does not need to be changed or tested again until new features are required or bugs are found. Each piece can use the best technology for the task, and upgrades can happen on a per-service basis.

Currently, these services are most often deployed as JSON-based REST services, which are quickly becoming ubiquitous. A wide variety of public services exist for things like weather data, stock quotes, and ISO country codes. This modular approach not only decreases development effort, it improves application stability as well.
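As a rough sketch of just how small one of these services can be, here is a standalone JSON endpoint written against the HttpServer class that ships with the JDK (com.sun.net.httpserver). The endpoint and its hard-coded stock quote are purely illustrative, not a production design:

    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;
    import com.sun.net.httpserver.HttpServer;

    public class QuoteService {
        public static void main(String[] args) throws IOException {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

            // One self-contained responsibility: GET /quote returns a JSON document.
            server.createContext("/quote", exchange -> {
                byte[] body = "{\"symbol\":\"XYZ\",\"price\":42.50}"
                        .getBytes(StandardCharsets.UTF_8);
                exchange.getResponseHeaders().set("Content-Type", "application/json");
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(body);
                }
            });

            server.start();
        }
    }

Because the service owns nothing beyond its single endpoint, it can be rewritten, redeployed, or replaced without touching the rest of the system.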

On any project you’re involved in, whether you’re writing the code or managing the project, modularity and loose coupling should be among your most important guiding principles.

Gaming Addiction Disorder?

I read a recent article indicating that the World Health Organization has recognized gaming addiction as a disorder. Decisions like this should bring fear to all tech organizations. As we strive to create compelling content, we run the risk that someone may become ‘addicted’ to our content in a way that negatively impacts their life. Websites like Facebook, Pinterest, Thingiverse, and others can easily become a time black hole, with an afternoon gone before you even realize it. Isn’t that what we want, though? We want to attract users and keep their interest. But what happens when courts rule that the addiction we caused created a negative financial impact on the user? Will content providers like Netflix be sued in class action lawsuits because people couldn’t stop binge-watching the newest TV show? How far will this go? Will we have a future where employees who watch TV all day can’t be fired because they suffer from a recognized disorder? Must we, as employers, provide reasonable accommodations for their disorder? How about social programs – will we be required to provide welfare because someone has become so ‘disabled’ that they can’t work?

When I was younger, I played a lot of games too. I spent hours playing The Legend of Zelda, Super Mario Brothers, and other Nintendo games, and even today I enjoy playing classic NES games. But gaming is only one part of my life. I enjoy reading, learning foreign languages, studying astronomy, playing board games, and so many other things. Gaming has never become an all-consuming obsession in my life.

I am concerned to see how this will play out and what the ramifications may be, but I do know this – the future will be filled with people suffering from carpal tunnel and vision problems, looking back on lives filled with the regret of having sold their dreams for a digital fantasy world that left them empty in the end. And we, as taxpayers and businesses, will be left with the financial burden.

The Value of Open Standards

One of the most frustrating aspects of working in the tech world is dealing with proprietary systems, protocols, standards, and languages. Countless technologies do things their own way, even when standards exist for the technology. Some vendors avoid open standards simply so they can hold a segment of the tech population hostage. Unfortunately, many of the biggest offenders are also among the largest tech companies out there. Because of their influence, they ignore standards and create their own. These vendors may argue that they created their own way of doing things because the standard isn’t robust enough to do what they want. While this may occasionally be true, it’s often an excuse. A shining example of this problem is the Swift language created by Apple. Did we really need a new language for iOS? Certainly it was time to retire Objective-C (another Apple-only language), but did Apple really need to create a new language? Were there no existing languages that would have achieved their objectives? I doubt Swift was really necessary. Microsoft has done the same thing with its C# language. While I personally like the language, did they really need to create their own clone of Java? How about Microsoft’s Active Directory? If you’ve ever tried to integrate a non-Windows machine, you quickly see how painful it is. These are just a few examples, but I am sure everyone reading this has experienced issues where things weren’t compatible that should have been, had the vendors simply followed existing standards.

Why does this really matter? Because incompatibility negatively impacts consumers. When vendors create proprietary protocols, the consumer often loses. Time is wasted trying to connect a Mac to a Windows network. Programmers spend time learning another language that is useless outside of a specific niche. Money is wasted on additional software to convert between file formats. Users waste time fixing an OpenOffice document that doesn’t import cleanly into Microsoft Office.

As new technologies continue to come out, this problem is only going to get worse unless we start working today to create open standards and to follow existing standards.

Lifelong Learning

I’m a nerd – it’s hardly a secret. My favorite hobby is learning. From math to foreign languages to astronomy to theology, I read and learn all the time. That’s one of the things that makes being a programmer such an amazing job. For most people, work is the same today, tomorrow, next week, next year – the same tasks, the same problems, over and over again. Software engineering is completely different. Rarely are the problems the same from project to project. Every single day, a new problem appears for development teams to tackle. A new user experience to create. A new bug to find. But it doesn’t stop there. Technology never stands still. Technologies that appear cutting edge today will seem antiquated in a year. The libraries that make everything seem so easy this week will be cumbersome this time next year. For twenty years, I have watched new languages spawn, new programming paradigms become popular, new technology stacks gain prominence, and new tools take the limelight. As I was writing some JavaScript code today, I learned a new way to solve a problem that comes up repeatedly in my projects. Next week, I’ll learn another novel way to solve another problem. That’s the life of a programmer – an inexorable march toward the ever-elusive goal of being an expert. Indeed, what makes programming so fun is that, no matter how much you learn, the subject is inexhaustible and beckons you back for more every single day!