Dangers of Artificial Intelligence


With the growth of artificial intelligence, one of the subjects we don't discuss enough is the possible dangers it may create. While AI may help us drive our cars more safely or provide faster, more accurate diagnoses of medical issues, it may also create problems for society. What are those problems, and what should we do to minimize those risks?

Poorly Tested Code

As a software engineer, my biggest worry is that poor-quality code will be widely deployed in artificial intelligence systems. Look around today, and you will see what I mean. I use a Mac, and the current version of Safari is riddled with bugs. Indeed, nearly every application on my computer has several updates per year to address bugs.

This is caused, in part, by the demands of business. I have worked for many companies over the years that wanted to push out a new version even though known bugs existed. For the business, shipping early is necessary to beat the competition to new features. However, this acceptance of buggy software can be disastrous in the world of AI.

For example, what happens when the artificial intelligence system misdiagnoses cancer? For the individual, this could have life-altering effects. What about the self-driving car? Someone could be hit and killed.

How good is good enough for artificial intelligence? I don’t have an answer, but it is something businesses need to strongly consider as they dive deeper into the world of AI.

Deep Fakes

A growing concern for artificial intelligence is how it could be used by organizations or political entities to persuade consumers or voters. For example, a political adversary of the president could create a fake video of the president engaged in some behavior that would bring discredit upon him. How would the electorate know it is a fake? Even worse, how could our nation’s enemies use fake videos for propaganda purposes here or abroad?


Job Loss

In many ways, advances in artificial intelligence are very similar to the changes of the industrial revolution. As AI becomes more advanced, we can expect to see more and more jobs performed by intelligent robots or computer systems. While this will benefit businesses that can cut payroll, it will have a negative impact on laborers who can easily be replaced.

What Should We Do?

This is just a very small list of potential issues. Indeed, numerous techies have discussed countless other risks we face as we adopt more AI-based systems. But what should we do? The value of AI to our lives will be profound, but we must start to consider how we will address these challenges from both a legal and a societal perspective.

For example, we may need to create laws regarding liability for AI systems to ensure that businesses provide adequate testing before deploying systems. But problems like deep fakes and employment aren’t as easy to fix. We can certainly provide training to individuals who are displaced by AI, but as more and more jobs are replaced, where will all the workers go?

I don’t have the answers. However, I think it is time for techies and non-techies alike to start asking the questions so that we can reap the benefits of improving artificial intelligence while mitigating the potential risks.

Basics of Artificial Intelligence – IX

After an artificial intelligence algorithm is selected and trained, the final step is to test the results. When we started, we split the data into two groups – training data and testing data. Now that we’re done training the algorithm with the training data, we can use the test data for testing. During the testing phase, the algorithm predicts the output for each test example, and we compare that prediction to the actual answer. In most datasets, some data will be incorrectly predicted.
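As a minimal sketch of the split-train-test workflow described above, the snippet below uses Scikit-Learn's `train_test_split` and an MLP classifier. The iris dataset and the MLP settings here are illustrative stand-ins, not the voice dataset from this series.

```python
# A minimal sketch of the train/test split and evaluation described above;
# the dataset and MLP settings are illustrative, not the article's voice data.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)

# hold out a portion of the data for testing before any training happens
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

mlp = MLPClassifier(max_iter=1000, random_state=42)
mlp.fit(X_train, y_train)          # train only on the training split

# compare predictions against the held-out answers
accuracy = mlp.score(X_test, y_test)
print(f"Test accuracy: {accuracy:.2f}")
```

The key point is that the test rows never influence training, so the accuracy score reflects how the model handles data it has not seen.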

The confusion matrix allows us to get a better understanding of the incorrectly predicted data by showing what was predicted vs. the actual value. In the matrix below, I trained a neural network to determine the mood of an individual based on characteristics of their voice. The Y axis shows the actual mood of the speaker and the X axis shows the predicted value. From my matrix, I can see that my model does a reasonable job predicting fear, surprise, calm, angry, and happy but performs more poorly for normal and sad. Since my matrix is normalized, the numbers indicate percentages. For example, 87 percent of afraid speakers were correctly identified.
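To make the normalization concrete, the small example below computes a row-normalized confusion matrix with Scikit-Learn's `confusion_matrix` function. The mood labels here are made up for illustration, not taken from the article's actual results.

```python
# A small illustration of a row-normalized confusion matrix;
# the labels and predictions are made up, not the article's voice data.
from sklearn.metrics import confusion_matrix

y_actual = ["happy", "happy", "sad", "sad", "sad", "calm", "calm", "calm"]
y_pred   = ["happy", "happy", "sad", "happy", "sad", "calm", "calm", "sad"]

labels = ["calm", "happy", "sad"]

# normalize='true' divides each row by its total, so each cell becomes the
# fraction of actual examples of that class predicted as each label
cm = confusion_matrix(y_actual, y_pred, labels=labels, normalize="true")
print(cm)
```

Each row sums to 1, so a diagonal cell of 0.87 reads directly as "87 percent of that class was predicted correctly."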

Creating the above confusion matrix is simple with Scikit-Learn. Start by selecting the best model and then predict the output using that classifier. For my code below, I show both the normalized and standard confusion matrix using the plot_confusion_matrix function.

import matplotlib.pyplot as plt
from sklearn.metrics import plot_confusion_matrix

# use the best model selected earlier
classifier = mlp

# predict values for the test set
pred = classifier.predict(X_test)

# plot non-normalized and normalized confusion matrices
titles_options = [("Confusion matrix, without normalization", None),
                  ("Normalized confusion matrix", 'true')]
for title, normalize in titles_options:
    disp = plot_confusion_matrix(classifier, X_test, y_test,
                                 normalize=normalize)
    disp.ax_.set_title(title)

plt.show()


With the above matrix, I can now go back to the beginning and make changes as necessary. For this matrix, I may collect more samples for the categories that were incorrectly predicted. Or, I may try different settings for my neural network. This process continues – collecting data, tuning parameters, and testing – until the solution meets the requirements for the project.
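One way to automate the "try different settings" part of that loop is a grid search over candidate parameters. The sketch below uses Scikit-Learn's `GridSearchCV` with an illustrative dataset and hypothetical parameter values; it is one possible approach, not the method used in this series.

```python
# A hedged sketch of automated parameter tuning with GridSearchCV;
# the dataset and the parameter values tried are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

# candidate settings to try; these are example values, not tuned ones
param_grid = {
    "hidden_layer_sizes": [(10,), (50,), (50, 25)],
    "alpha": [1e-4, 1e-2],
}

# cross-validate every combination on the training data only
search = GridSearchCV(MLPClassifier(max_iter=1000, random_state=42),
                      param_grid, cv=3)
search.fit(X_train, y_train)

print("Best settings:", search.best_params_)
print("Test accuracy:", search.best_estimator_.score(X_test, y_test))
```

Because the search cross-validates on the training split, the held-out test data still gives an honest final score for the chosen settings.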


If you’ve been following along in this series, you should now have a basic understanding of artificial intelligence. Additionally, you should be able to create a neural network for a dataset using Scikit-Learn and Jupyter Notebook. All that remains is to find some data and create your own models. One place to start is data.gov – a US government site with a variety of data sources. Have fun!