The Technology Ride

The world has gone through some amazing transformations over the last half century. In the early '80s, computers were a rarity and cell phones were a novelty reserved for wealthy business executives. That changed during the '90s with the release of Windows 95, a pivotal point in the history of technology: for the first time, computers were easy enough for the average home user. A decade later, Apple would release the iPhone, followed by Google's Android platform, and the face of technology changed again. Today, computers and cell phones are ubiquitous.

My Experiences

Born in the late '70s, I have been able to witness this transformation firsthand. I have also had the incredible opportunity to take part in creating some of these technologies myself. My career began in the US Army in 1995, where I served as a member of the Intelligence Community. I learned to use and administer SunOS and Solaris machines and began my experimentation with programming. It was in this environment that I developed a love for Unix-based systems that continues to this day.

On those Unix machines, I started programming in C, C++, TCL, Perl, and Bourne Shell. While my first programs were pretty bad, I eventually had the opportunity to write code for a classified government project. That code earned me a Joint Service Achievement Medal and made a profound impact within the intelligence community at the time.

After leaving the Army, I entered the civilian workforce to develop Point-of-Sale applications using C++. I would spend nearly two decades developing code for a variety of companies using various platforms and languages. I developed low-level code for phone systems, created custom Android operating systems, and programmed countless web and mobile applications.

Today

Now, I run my own business developing software for clients and creating artificial intelligence solutions. But when I look back, I am always amazed at how far the technology revolution has brought us, and I am thankful that I have had a chance to be a part of that revolution! With 20 years left before I retire, I can’t imagine where technology will take us tomorrow. Yet, I can’t imagine not being a part of that future!

Those born in the '90s or the new millennium will never know how much the world has changed. But my generation watched it happen – and many of us played a part in making it happen!

Red Flags in your Job Search

Job Search

Running a business full-time has been an incredibly rewarding experience for me. It has offered me substantially more freedom and control of my life than any other job has. However, I did consider reentering the workforce when the COVID pandemic started because I was unsure if my business would survive. When I was interviewing with companies, I observed several red flags that made me reconsider those businesses.

Unlimited Paid Time Off

A new trend in tech companies is unlimited paid time off. This sounds like a benefit, but I’m incredibly skeptical. Do you really believe you will be able to take unlimited time off? I doubt it. Furthermore, since you have not actually ‘earned’ any paid time off, you can likely expect management to complain about the vacation you are taking. Looking for that promotion? Jane in the adjacent cubicle didn’t take as much time off this year. Leaving your job? Don’t expect to get that nice bonus payout for your accrued vacation since you don’t have any. Frankly, I am convinced that unlimited PTO is just a scheme to have you take fewer days off, not more.

Concerns About Side Business

When I considered reentering the workforce, I was questioned about how long it would be until I closed down my business. While I had run my company as a side gig for years, interviewers didn’t want me to do that again. Why? Unless you intend to have me work 60+ hour weeks, what does it matter if I have a side business that does not compete with you? Conversely, I did have one individual actually say that my side business was a plus because it suggested that I would be learning more and growing professionally even outside of work hours.

Burdensome Legal Contracts

During my career, I have seen too many burdensome legal contracts. In one instance, a company wanted me to work for them as a subcontractor. Yet they insisted that I sign both a non-compete agreement and an intellectual property agreement giving them ownership of everything I did. The business ignored the fact that I was operating in the same space they were and that signing a broad non-compete agreement would have ended my business. Furthermore, their intellectual property agreement gave them rights to all work I did – not just the work I was doing for them. Thus, if work I performed for another company was patented, this company could have claimed ownership. Always be careful about the exact wording of contracts, particularly if you run a side business.

Pushy Recruiters

For one position, I saw too many red flags and opted out of the interview process. Then the recruiter started contacting me with high-pressure tactics to continue. She told me that the company's stock options would be 'life changing.' In reality, I truly doubt that the stock they would have given me would have changed my life. Furthermore, I recognize that recruiters get paid to find candidates; her high-pressure tactics were really just a way to earn a sizable commission if I took the job.

Conclusion

When you look for a job, never forget that the employer's objective is to make money from your work. As such, they have a vested interest in minimizing your pay and benefits while maximizing the work that you do. Too many Americans already work excessive hours, and a disturbing number of businesses seem to encourage that behavior. If you are looking to change jobs, weigh what you see in the interview process along with what you can research about the company online before you make a decision, and always keep your eyes open for red flags. Remember – you should work to live, not live to work. Make sure the company you work for has a similar attitude!

Overview of CompTIA Certifications

A variety of computer certifications exist today. They fall into one of two categories – vendor-neutral or vendor-specific. In the vendor-neutral category, CompTIA is the industry leader. Best known for its A+ certification, CompTIA has been around for 40 years and has certified over 2.2 million people.

Today, CompTIA issues over a dozen IT certifications covering everything from computer hardware to project management. Beyond single certifications, CompTIA also offers what it calls 'Stackable Certifications', which are earned by completing multiple individual certifications. For example, earning both the A+ and Network+ certifications results in the CompTIA IT Operations Specialist designation.

Hardware Certifications

Individuals who want to work in computer hardware maintenance and repair should start with the A+ certification. This exam covers basic computer hardware and Windows administration tasks – the fundamental knowledge anyone who works with computers needs for success.

Once you have mastered computer hardware, the next step is computer networks. This knowledge is covered by CompTIA's Network+ certification. Topics in this exam include wireless and wired network configuration, switches and routers, and other knowledge required for network administration. Note that this exam is vendor-neutral; knowledge of vendor-specific equipment (such as Cisco routers) is not required.

Security Certifications

CompTIA offers a variety of security certifications for those who wish to ensure their networks are secure or to test network security. The first exam in this category is the Security+ exam. This exam covers basics of security including encryption, WiFi configuration, certificates, firewalls, and other security topics.

Next, CompTIA offers a variety of more in-depth security exams on topics such as penetration testing (PenTest+), cybersecurity analysis (CySA+), and advanced security issues (CASP+). Each of these exams continues where Security+ ends and requires far more extensive security knowledge. With all of the security issues in the news, these certifications are in high demand among employers.

Infrastructure Certifications

CompTIA offers several tests in what it calls the ‘infrastructure’ category. These exams are particularly useful for people who administer cloud systems or manage servers. Certifications in this category include Cloud+, Server+, and Linux+. If your organization utilizes cloud-based platforms, such as AWS or Google Cloud Platform, these certifications provide a vendor-neutral starting point. However, if you really want to dive deep into topics like AWS, Amazon offers numerous exams specifically covering their platform.

Project Management Certification

While not hardware related, CompTIA offers an entry-level project management certification called Project+. The exam is less detailed and time-consuming than other project management certifications but covers the essentials of project management.

Conclusion

For the aspiring techie or the individual looking to advance their career, CompTIA provides a number of useful certifications. While certifications from other vendors may cost thousands of dollars, CompTIA exams are generally under $400. This is money well spent in the competitive IT world as CompTIA is one of the most respected names in vendor-neutral IT certifications.

Apple vs Android – A Developer’s Perspective

While most applications are developed for both iPhone and Android, new developers face a choice of which platform to learn first. Both ecosystems offer excellent apps and a wide variety of devices, but they differ considerably from a developer's perspective.

Android Development

For the novice, Android development is probably the easier entry point. For starters, low-end Android phones are cheaper to purchase than iPhones. More importantly, Android developers can use Windows, Linux, or Mac machines for development. So, if you have a computer and an Android phone, you can get started right away.

Android apps are written in Java or Kotlin. While Kotlin is the newer language, more learning resources are available for Java. Furthermore, once you learn Java, other development opportunities open up to you – such as backend services using frameworks like Spring Boot.

Once you have learned how to program Android phones, you will find that other devices use Android as well. This includes Virtual Reality hardware such as Oculus, Augmented Reality glasses from vendors like Vuzix, and smart watches.

Publishing to Google is relatively simple too. Once you pay a one-time fee, you are a licensed developer and can create and deploy applications to the Google Play store. While there is some oversight from Google, it is less burdensome than Apple's requirements.

iPhone Development

iPhone development is a little more complicated. For starters, you will need a Mac machine as the tools for iPhone development do not run under Windows or Linux. Furthermore, both Apple computers and iPhones tend to be more expensive for a small development setup.

While Android's Java language is used everywhere, the iPhone's Swift language is far more limited. In fact, Swift sees little use outside the Apple ecosystem. So, if you choose to develop other services to integrate with your phone app, you will need to learn an additional language.

Unlike Android, few other devices run iOS. Thus, your iPhone development skills will not translate to programming other devices, aside from the Apple Watch.

Finally, Apple's App Store is more expensive and more burdensome than the Google Play Store. For starters, Apple requires developers to pay an annual license fee, which costs more than Google's one-time fee. Furthermore, the App Store is far stricter about app requirements and exercises significantly more oversight over its marketplace.

Conclusion

While I think both the Apple and Android phones are excellent, I personally find the Android developer experience to be more positive. This is particularly true for the indie developer or individual looking to learn mobile development.

What is Computer Vision?

Computer Vision is a rapidly growing technology field that most people know little about. While Computer Vision has been around since the 1960s, its growth really exploded with the creation of the OpenCV library. This library provides the tools software engineers need to create Computer Vision applications.

But what is Computer Vision? Computer Vision is a mix of hardware and software tools used to identify objects in photos or camera input. One of the better-known applications of Computer Vision is the self-driving car. In a self-driving car, numerous cameras collect video input, and the video streams are examined to find objects such as road signs, people, stoplights, lane lines, and other data essential for safe driving.
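To give a taste of what this looks like in code, below is a minimal sketch using OpenCV's Python bindings to find faces in a still image. It assumes the opencv-python package is installed and uses a hypothetical input file named street.jpg along with the pre-trained face cascade that ships with the library; a production vision system would be far more sophisticated.

# Minimal OpenCV sketch: detect faces in an image (assumes opencv-python is installed)
import cv2

# Load a hypothetical test image; replace 'street.jpg' with your own file
image = cv2.imread('street.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Use the pre-trained face cascade bundled with OpenCV
cascade_path = cv2.data.haarcascades + 'haarcascade_frontalface_default.xml'
detector = cv2.CascadeClassifier(cascade_path)

# Find faces and draw a rectangle around each one
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite('street_faces.jpg', image)
print(f'Found {len(faces)} face(s)')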

However, this technology isn’t just available in self-driving cars. A vehicle I rented a few months ago was able to read speed limit signs as I passed by and display that information on the dash. Additionally, if I failed to signal a lane change, the car would beep when I got close to the line.

Another common place to find Computer Vision is in factory automation. In this setting, specialized programs may monitor products for defects, check the status of machinery for leaks or other problematic conditions, or monitor the actions of people to ensure safe machine operation. With these tools, companies can make better products more safely.

Computer Vision and Artificial Intelligence are also becoming more popular for medical applications. Images of MRI or X-Ray scans can be processed using Computer Vision and AI tools to identify cancerous tumors or other problematic health issues.

On a less practical note, Computer Vision tools are also used to modify user-generated photos and videos, such as adding a hat or a funny face. They can also identify faces in an image for tagging.

Computer Vision technologies are showing up in more and more places each day and, when coupled with AI, will ultimately result in a far more technologically advanced world.

What is the Dark Web?

Most people have heard of the Dark Web in news stories or tech articles. But what is it? How does it work? Is it worth visiting?

The Dark Web is a hidden network of highly encrypted machines that is reachable over the internet, but not through a typical web browser. While any content can be stored on the Dark Web, much of it is of a questionable nature, such as child pornography, snuff films, and drug and fake ID stores. One such marketplace, Dark Amazon, provides users with an Amazon-like shopping experience.

However, not everything on the Dark Web is illegal or unethical. In fact, the Dark Web can be a very useful tool for individuals in China to access Facebook (they have a Dark Web site) or for intelligence operatives in Iran to contact the CIA (they’re on the Dark Web too).

In short, the Dark Web is a useful tool if you are trying to remain anonymous or are operating within a country with strict internet controls. But what do you need to get started? Simple: download the TOR browser. TOR stands for The Onion Router; like an onion, traffic is wrapped in multiple layers of encryption and routed through numerous machines to prevent tracking. TOR browsers exist for all major platforms, including mobile, and are really no different to use than any other web browser.

The real challenge, however, is finding content. For that, you will need a Dark Web search engine, such as TOR66. A few suggestions before you give it a try. First, run a VPN: while there is nothing illegal about using a TOR browser, you may draw suspicion from your ISP, and it is always possible that TOR users are being watched by law enforcement. Second, make sure the antivirus on your machine is up to date – you never know what's out there. Third, trust no one. The Dark Web is a place of thieves, con artists, drug dealers, and other people with questionable ethics.

Productivity Gains through Aliases & Scripts

For users of Mac or Linux-based machines, aliases and scripts are some of the most valuable tools for increasing productivity. Even if you run a Windows machine, there is a strong possibility that some of the machines you interact with – such as AWS servers – run Linux.

So, what are aliases and scripts? Scripts are files that contain the sequence of instructions needed to perform a complex procedure. I often create scripts to deploy software applications to a development server or to execute complex software builds. Aliases are much shorter, single-line commands that are typically placed in a shell startup file such as the .bash_profile file on macOS (a quick sketch of loading them appears after the alias list below).

Below are some of the aliases I use. Since Mac doesn't have an hd command (like Linux does), I have aliased it to call hexdump -C. Additionally, Mac has no rot13 command – a very old utility that performs a Caesar cipher – so I have aliased it using tr.

Since I spend a lot of time at the command prompt, I have created a variety of aliases to shortcut directory navigation. These include a series of up commands to move up the directory hierarchy (particularly useful in a large build structure) and a command to take me to the root folder of a git project.

Finally, I have a command to show me the last file created or downloaded. This can be particularly useful; for example, to view the most recently created file, I can simply execute cat `lastfile`.

# hex dump (macOS has no hd command)
alias hd='hexdump -C'
# always show human-readable sizes
alias df='df -h'
# Caesar cipher, since macOS has no rot13 command
alias rot13="tr 'A-Za-z' 'N-ZA-Mn-za-m'"
# quick directory navigation
alias up='cd ..'
alias up2='cd ../..'
alias up3='cd ../../..'
alias up4='cd ../../../..'
alias up5='cd ../../../../..'
alias up6='cd ../../../../../..'
# jump to the root folder of the current git project
alias root='cd `git rev-parse --show-toplevel`'
# name of the most recently created or downloaded file
alias lastfile="ls -t | head -1"
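If you want to try any of these, a minimal way to install an alias (assuming the default bash shell on macOS; zsh users would edit ~/.zshrc instead) is to append it to your startup file and reload it:

# Append a favorite alias to the startup file, then reload it (bash on macOS assumed)
echo 'alias lastfile="ls -t | head -1"' >> ~/.bash_profile
source ~/.bash_profile

# The alias is now available in this shell and in every new terminal window
lastfile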

One common script I use is bigdir. This script shows me the size of every folder in the current directory, which helps me locate folders taking up significant space on my computer. A sample invocation follows the script.
#!/bin/bash
# bigdir - report the size of every folder in the current directory

# Temporarily change the field separator so folder names with spaces survive the loop
SAVEIFS=$IFS
IFS=$(echo -en "\n\b")

for file in `ls`
do
        # Only report directories; du -hs prints a human-readable total for each one
        if [ -d "$file" ]
        then
                du -hs "$file" 2> /dev/null
        fi
done

# Restore the original field separator
IFS=$SAVEIFS
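To run it, I save the script as an executable somewhere on my PATH (the name bigdir and the ~/bin location below are just my conventions) and optionally sort the output; sort -h understands the human-readable sizes on recent macOS and Linux systems.

# Make the script executable (assumes it was saved as ~/bin/bigdir with ~/bin on the PATH)
chmod +x ~/bin/bigdir

# Report folder sizes in the current directory; sort -h puts the largest folders last
bigdir | sort -h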

Another script I use helps me find a text string within all of the files in a folder. A sample invocation follows the script.
#!/bin/bash
# Search every file under the current directory for a string (case-insensitive)

# Temporarily change the field separator so file names with spaces survive the loop
SAVEIFS=$IFS
IFS=$(echo -en "\n\b")

if [ $# -ne 1 ]
then
        echo Call is: `basename $0` string
else
        for file in `find . -type f | cut -c3-`
        do
                # Count matching lines; quoting "$1" allows multi-word search strings
                count=`cat "$file" | grep -i "$1" | wc -l`
                if [ $count -gt 0 ]
                then
                        # Print a header for the file, then the matching lines
                        echo "******"$file"******"
                        cat "$file" | grep -i "$1"
                fi
        done
fi

# Restore the original field separator
IFS=$SAVEIFS
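Usage is a single search term, quoted if it contains spaces. The examples below assume the script is saved as an executable named findtext on your PATH; the name is my own choice for illustration.

# List every file under the current directory containing 'timeout', along with the matching lines
findtext timeout

# Multi-word searches also work because $1 is quoted inside the script
findtext "connection refused"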

These are just a few examples of ways to use scripts and aliases to improve your productivity. Do you have a favorite script or alias? Share it below!

Dangers of Artificial Intelligence

Risk/Reward

With the growth of artificial intelligence, one of the subjects we don't discuss enough is the danger it may create. While AI may help us drive our cars or provide faster, more accurate diagnoses of medical issues, it may also create problems for society. What are those problems, and what should we do to minimize the risks?

Poorly Tested Code

As a software engineer, my biggest worry is that poor-quality code will be widely deployed in artificial intelligence systems. Look around today, and you will see what I mean. I use a Mac, and the current version of Safari is riddled with bugs. Indeed, nearly every application on my computer has several updates per year to address bugs.

This is caused, in part, by the demands of business. Over the years, I have worked for many companies that wanted to push out a new version even when known bugs existed. For the business, this is necessary to beat the competition to market with new features. However, this acceptance of buggy software could be disastrous in the world of AI.

For example, what happens when the artificial intelligence system misdiagnoses cancer? For the individual, this could have life-altering effects. What about the self-driving car? Someone could be hit and killed.

How good is good enough for artificial intelligence? I don’t have an answer, but it is something businesses need to strongly consider as they dive deeper into the world of AI.

Deep Fakes

A growing concern for artificial intelligence is how it could be used by organizations or political entities to persuade consumers or voters. For example, a political adversary of the president could create a fake video of the president engaged in some behavior that would bring discredit upon him. How would the electorate know it is a fake? Even worse, how could our nation’s enemies use fake videos for propaganda purposes here or abroad?

Employment

In many ways, advances in artificial intelligence mirror the changes of the industrial revolution. As AI becomes more advanced, we can expect to see more and more jobs performed by intelligent robots or computer systems. While this will benefit businesses that can cut payroll, it will have a negative impact on laborers who can easily be replaced.

What Should We Do?

This is just a very small list of potential issues. Indeed, numerous techies have discussed countless other risks we face as we adopt more AI-based systems. But what should we do? The value of AI to our lives will be profound, but we must start to consider how we will address these challenges from both a legal and a societal perspective.

For example, we may need to create laws regarding liability for AI systems to ensure that businesses provide adequate testing before deploying systems. But problems like deep fakes and employment aren’t as easy to fix. We can certainly provide training to individuals who are displaced by AI, but as more and more jobs are replaced, where will all the workers go?

I don't have the answers. However, I think it is time for techies and non-techies alike to start asking the questions so that we can reap the benefits of improving artificial intelligence while mitigating the potential risks.

Basics of Artificial Intelligence – IX

After an artificial intelligence algorithm is selected and trained, the final step is to test the results. When we started, we split the data into two groups – training data and testing data. Now that we're done training the algorithm with the training data, we can use the test data for testing. During the testing phase, the algorithm predicts an output for each test record, and we compare that prediction to the actual answer. In most datasets, some records will be incorrectly predicted.
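As a refresher, here is a minimal sketch of that split using Scikit-Learn's train_test_split. It assumes, as in the earlier posts in this series, a pandas DataFrame named data and a label column identified by predictionField; your variable names may differ.

from sklearn.model_selection import train_test_split

# Separate the features from the label we want to predict
X = data.drop(columns=[predictionField])
y = data[predictionField]

# Hold back 20% of the rows for testing; the rest is used for training
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)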

The confusion matrix gives us a better understanding of the incorrectly predicted data by showing what was predicted versus the actual value. For this example, I trained a neural network to determine the mood of an individual based on characteristics of their voice. The Y axis shows the actual mood of the speaker and the X axis shows the predicted value. From the resulting matrix, I can see that my model does a reasonable job predicting fear, surprise, calm, angry, and happy, but performs more poorly for normal and sad. Since my matrix is normalized, the numbers indicate percentages; for example, 87 percent of fearful speakers were correctly identified.

Creating a confusion matrix like this is simple with Scikit-Learn. Start by selecting the best model, then predict the output using that classifier. In my code below, I plot both the normalized and standard confusion matrix using the plot_confusion_matrix function.

# Imports assumed from earlier posts in this series; note that plot_confusion_matrix
# was removed in newer scikit-learn releases (ConfusionMatrixDisplay.from_estimator
# is the modern replacement)
import matplotlib.pyplot as plt
from sklearn.metrics import plot_confusion_matrix

# PICK BEST PREDICTION MODEL
classifier = mlp

# Predict values for the held-out test data
pred = classifier.predict(X_test)

# Plot both the raw-count and normalized confusion matrices
titles_options = [("Confusion matrix, without normalization", None),
                  ("Normalized confusion matrix", 'true')]
for title, normalize in titles_options:
    disp = plot_confusion_matrix(classifier, X_test, y_test,
                                 display_labels=data[predictionField].unique(),
                                 cmap=plt.cm.Blues,
                                 normalize=normalize)
    disp.ax_.set_title(title)

plt.show()

With the above matrix, I can now go back to the beginning and make changes as necessary. For this matrix, I may collect more samples for the categories that were incorrectly predicted. Or, I may try different settings for my neural network. This process continues – collecting data, tuning parameters, and testing – until the solution meets the requirements for the project.

Conclusion

If you've been following along in this series, you should now have a basic understanding of artificial intelligence. Additionally, you should be able to create a neural network for a dataset using Scikit-Learn and Jupyter Notebook. All that remains is to find some data and create your own models. One place to start is data.gov – a US government site with a variety of data sources. Have fun!

Basics of Artificial Intelligence – VIII

Neural Networks are an incredibly useful method for teaching computers how to recognize complex relationships in data. However, in order to get them working properly, you need to know a little more about how they work and how to tune them. This week, we’ll be looking at the two key settings for Neural Networks in scikit-learn.

What Is a Neural Network?

But before we go into those settings, it's useful to understand what a neural network is. Neural networks are an attempt to model computer intelligence on how the human brain works. In our brains, neurons receive electrical impulses from other neurons and, optionally, transmit impulses to other neurons, which in turn decide how to act on the signals they receive. Our brains have an estimated 100 billion neurons, all connected in a network that receives and processes data.

In the computer, this same idea is replicated with the neural network. The input values for the network form the first layer of neurons in the artificial brain. From there, one or more hidden layers are created, each connected to the outputs of the previous layer. Finally, one or more output neurons provide the user with the answer from the digital brain. Of course, this assumes the network has been trained to identify the data.

So, for the developer, the first step in creating a neural network is to determine the number of layers and the number of neurons in each layer. Next, the developer selects from a group of 'activation functions' that define when a neuron fires. In scikit-learn, the available options include the logistic sigmoid function (logistic), the hyperbolic tangent function (tanh), and the rectified linear unit function (relu). Various other parameters can also be set to further tune the network.

Back to the Code

# Import the classifier (assumed from earlier posts in this series)
from sklearn.neural_network import MLPClassifier

# Create a Neural Network (AKA Multilayer Perceptron or MLP)
# In this example, we will create 3 hidden layers
# The first layer has 512 neurons, the second 128 and the third 16
# Use the rectified linear unit function for activation (RELU)
# For training, iterate no more than 5000000 times
mlp = MLPClassifier(
    hidden_layer_sizes=(512, 128, 16),
    activation='relu',
    max_iter=5000000
)

You can see in the above code that we are going to try three hidden layers. This is simply a first guess; we will want to repeatedly attempt different network configurations until we find a model that performs to the required specifications.

# recall_score and precision_score come from sklearn.metrics
from sklearn.metrics import precision_score, recall_score

# Train the neural network
mlp.fit(X_train, y_train)

# Get accuracy metrics for the training and test sets
train_metric = mlp.score(X_train, y_train)
test_metric = mlp.score(X_test, y_test)

# Predict the test set and compute recall/precision;
# average='weighted' is needed when there are more than two classes
pred = mlp.predict(X_test)
recall_metric = recall_score(y_test, pred, average='weighted')
precision_metric = precision_score(y_test, pred, average='weighted')

With the above code, we can retrieve scores indicating how well the model performed. With a perfect network, all values would be 1 – meaning 100% accuracy. However, this is rarely the case with real data, so you will need to determine what level of accuracy your application requires. For some problems, 80% may be as good as it gets.

Armed with this information, you should now be able to repeatedly train your network until you have the desired output. With a large dataset and a large number of configurations, that may take a substantial amount of time. In fact, training and testing are by far the most time-consuming parts of AI development.
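If you would rather not run those experiments by hand, Scikit-Learn can automate the search. Below is a rough sketch using GridSearchCV; the layer sizes and activation functions listed are illustrative guesses rather than recommendations, and every added combination multiplies the training time.

from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

# Candidate settings to try; every combination is trained and cross-validated
param_grid = {
    'hidden_layer_sizes': [(512, 128, 16), (256, 64), (128,)],
    'activation': ['relu', 'tanh'],
}

# 3-fold cross-validation on the training data
search = GridSearchCV(MLPClassifier(max_iter=5000000), param_grid, cv=3)
search.fit(X_train, y_train)
print(search.best_params_)

# The winning model can then be scored on the test set like the mlp above
print(search.best_estimator_.score(X_test, y_test))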

What’s Next?

Next week, we will look at the final part of developing an AI solution – the Confusion Matrix. This chart will give us a better understanding of how our network is performing than the simple metrics we calculated above.