Three Problems for Remote Workers

Remote work has grown significantly since the COVID-19 pandemic. While many young workers are eager to embrace this change, there are several challenges to consider before committing to a remote-only job.

Larger Pool of Competitors

I live in a relatively small community. As such, there is a limited pool of both qualified workers and tech jobs. When a job becomes remote-only, however, the pool of applicants can include anyone. Remote workers therefore need to be far more knowledgeable to attract the attention of remote employers. This larger pool of candidates also means that employers can seek out people willing to work for far less than you are asking. In this arrangement, the business benefits from remote work through smaller office costs, better candidates, and the potential to pay less; the candidate, conversely, may see fewer opportunities at lower pay.

Easier to Outsource

Beyond finding candidates within the United States, remote jobs give employers the ability to outsource their work to Latin America, Ukraine, Southeast Asia, or other regions with significantly lower wage expectations. As above, this benefits the employer and works against domestic workers finding jobs that pay fair wages. Even if employers choose to keep those jobs in the US, they can find developers in cheaper regions of the country. Because outsourcing can be significantly cheaper than hiring domestic workers, a trend toward outsourcing could threaten entire industries of workers (such as computer professionals) who are accustomed to being well paid in the US.

Work Harder for Recognition

Because remote workers are unseen by many within the organization, they may struggle to gain recognition. As a result, they may be forced to work longer hours and still fall behind their in-office peers. Since remote workers have fewer opportunities to develop bonds with management and coworkers, they are also significantly easier to fire. This lack of bonds, combined with the difficulty of earning recognition, will ultimately lead to stagnant wage growth for many remote workers.

Conclusion

While many employees are increasingly fond of remote work, they may realize too late that their employer has gained more than they have. They may also find it harder to land work in a pool of highly qualified candidates, both domestic and international.

The Google Graveyard

Google is a much-beloved tech company. Many people are familiar with its products, such as Chrome, the Google search engine, and the Android operating system. However, behind these well-known products lies a graveyard of failures and discarded products. Among the biggest of those failures was Google+, the short-lived attempt by Google to enter the social media sphere. Countless other products have entered Google’s Graveyard as well. Google Hangouts, a once-popular chat application, is slated to be shut down in November of this year, and Google Chrome Apps was killed off earlier this year. For a full list, you can visit the Killed by Google website.

Clearly, businesses must determine which products are profitable and which are not. Those found to be unprofitable will be cut to free resources for more profitable products. However, these cuts carry far more weight when the company has as much influence in the technology sphere as Google does. For example, Google is responsible for Angular, Go, Dart, and Firebase, each of which is used by developers around the world. Should Google deem any of these tools obsolete, they may find their way into the Google Graveyard as well.

Even worse, Google has a history of making significant changes to its development tools – changes that often break everything. For example, the move from AngularJS to Angular made the former obsolete. Likewise, changes to the Firebase API, which migrated from Promises to Observable subscriptions, broke previously working code.

My concern, as a developer, is that Google’s track record of tossing out old ideas may one day leave programmers without the tools they need. Google may decide to abandon Angular for a newer, fancier framework. Or it may discontinue the Firebase service and leave scores of applications without a home. As such, I am now slower to adopt Google technologies, because I’m tired of watching useful tools end up in the Google Graveyard.

Mature Frameworks

As a seasoned software developer, one of the things I most value is mature frameworks. By mature, I don’t necessarily mean old. Rather, I mean a framework that will still be useful tomorrow. Furthermore, that framework should have documentation that will help me today and still be relevant in the future.

Languages like C, C++, and Java provide these frameworks. While Java has seen a number of revisions since it was created, code from 20 years ago will run just as well as modern code. It may not take advantage of newer features, but it is no less viable. Furthermore, books on Java development from 15 years ago can still teach new developers how to program. In fact, the only part of Java that has really been significantly obsoleted is the windowing frameworks. Likewise, C and C++ code from the ’90s will still run well today with some minor tweaking.

At the other end of the spectrum lie the various JavaScript frameworks. While JavaScript has been around since the early days of the web, its frameworks seem to be constantly evolving. Consider Angular. I was recently asked to update an application written in Angular just a few years ago, only to quickly learn that many of its APIs were now obsolete. For example, the HTTP client had been deprecated, and other libraries expected Observables instead of Promises. To make matters worse, the entire Firebase API had been virtually scrapped in favor of a new one.
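
To illustrate the kind of migration involved, here is a minimal sketch using plain RxJS rather than the actual Angular or Firebase APIs; the function names and URL are hypothetical, and it assumes a runtime where fetch is available.

```typescript
import { from, Observable } from "rxjs";

// Older code in the application worked with Promises directly.
async function loadUserPromise(url: string): Promise<unknown> {
  const response = await fetch(url);
  return response.json();
}

// Newer, Observable-based libraries expect a stream instead, so existing
// Promise-returning code has to be wrapped and its call sites rewritten
// to subscribe rather than await.
function loadUserObservable(url: string): Observable<unknown> {
  return from(loadUserPromise(url));
}

loadUserObservable("https://example.com/api/user").subscribe({
  next: (user) => console.log("received", user),
  error: (err) => console.error("request failed", err),
});
```

Multiply that kind of rewrite across an entire codebase and the cost of these framework shifts becomes clear.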

What makes these changes even more difficult for developers is that the documentation is often poor or non-existent. For example, I attempted to run Google’s newest Firebase API tutorial only to find that none of the code would compile against the latest libraries. I had to spend an hour looking for a more recent code sample. Try to find a current book on the Angular framework and you will quickly see the challenge.

Unfortunately, this seems to be par for the course for JavaScript frameworks. Each one undergoes numerous changes and offers little documentation or training material for developers. Yet we continue to use these frameworks. Why? Because JavaScript is the de facto standard for web development. You have no other choice! If you did, you would certainly use a mature framework that would still compile tomorrow!

The Value of an Idea

What is an idea worth? I am frequently approached by friends and acquaintances with ideas for software applications. Typically, the individual believes that the idea will generate huge revenue, and they will graciously share a portion of the profit with me for my time developing the software. In no instance does the individual suggest that they will help with the development; that’s what they need me for. Furthermore, they do not have the finances to hire a developer, so their hope is that I will invest my ‘sweat equity’ into their dream project.

There are a variety of problems with this proposal. First and foremost, I have bills to pay. As such, I can’t drop paying customers for the hope of a future payout. Furthermore, these projects typically involve niche markets with a limited number of customers. It is therefore necessary to examine a variety of factors to determine whether such an idea really has any value.

To determine that value, you must first estimate all of the costs involved in producing the software application. These include development time as well as a variety of other expenses: development licenses, deployment servers, cloud hosting, the cost of managing those services, and so on. The costs to market and distribute the application must be considered as well, such as the fees Apple or Google charge to distribute on their mobile platforms.

After the sum of all costs is determined, the individual must make a conservative estimate of the number of units that will sell as well as the anticipated price point. From there, the calculation is simple: multiply the unit price by the number of units sold and subtract all costs. The result is an estimate of the value of the idea. Sadly, in many instances, that value is well below $0, and many projects are simply not worth the effort because they have no return on investment.
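
To make the arithmetic concrete, here is a minimal sketch of that calculation; the interface and all of the numbers are hypothetical placeholders, not figures from any real project.

```typescript
// Back-of-the-envelope value of an app idea: revenue minus all costs.
interface IdeaEstimate {
  developmentHours: number; // estimated build time
  hourlyRate: number;       // what that development time is worth
  fixedCosts: number;       // licenses, hosting, store fees, etc.
  unitsSold: number;        // conservative sales estimate
  unitPrice: number;        // anticipated price point
}

function estimatedValue(e: IdeaEstimate): number {
  const totalCosts = e.developmentHours * e.hourlyRate + e.fixedCosts;
  const revenue = e.unitsSold * e.unitPrice;
  return revenue - totalCosts;
}

// A niche app with a small audience often comes out negative.
console.log(
  estimatedValue({
    developmentHours: 400,
    hourlyRate: 75,
    fixedCosts: 2000,
    unitsSold: 1500,
    unitPrice: 4.99,
  })
); // -24515 (well below $0)
```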

For the developer, it is rarely worthwhile to build an application on the hope of a future payout. If you do, you will very likely end up on the losing side of the equation. For the prospective client, if you feel strongly that your idea is profitable, develop a business plan. Then convince investors of the return they can hope to achieve. Once you have investors, you can go to a development firm with a far more profitable scenario – pay for the services you want and forget about free app development.

Technology Degrees Explained

Twenty years ago, technology degrees were divided into two categories: information technology management and computer science. During the last decade, however, universities have developed a variety of new programs to capitalize on the increasingly diverse technology needs of organizations. For the college student, this means trying to figure out how each of these degrees will help them meet their career objectives.

Computer Science

Computer Science degrees have been around for decades. These programs typically involve a significant amount of math along with in-depth study of programming, algorithms, and computer architecture. CS students often study more than one programming language during their program, and graduates frequently pursue careers in software development.

Information Technology Management

Like Computer Science degrees, Information Technology Management degrees have been around for decades. These programs are aimed more at network and system administration. Courses in Windows and Unix administration as well as databases are common, and students will often study network routing and switching protocols. IT majors typically go on to manage an organization’s networks.

Cybersecurity

A relative newcomer to the realm of IT degrees, the Cybersecurity degree path is similar to Information Technology Management. However, the Cybersecurity professional has a much deeper knowledge of how to secure networks as well as how to find and exploit vulnerabilities. Cybersecurity degrees may also include additional training on subjects such as law, cryptography, ethics, and risk management.

Data Analytics

Another relative newcomer, the Data Analytics degree mixes Computer Science with a deeper understanding of data. That understanding enables the student to build artificial intelligence models for solving complex business problems. Like the Computer Science student, the Data Analytics student will learn programming. However, whereas a CS student may learn Java or C++, the Data Analytics student is more likely to learn Python or R and to study statistics in greater depth.

Software Development

Another newer path, the Software Development degree takes a broad view of building software. While the Computer Science student dives deep into programming and algorithms, the Software Development curriculum also covers project management, quality assurance, network management, and databases.

Conclusion

While the number of paths is ever increasing, the truth is that there is a tremendous amount of overlap among them. Whether your degree is in Computer Science, Data Analytics, or Software Development, you can find a job as a computer programmer. However, each path will give you a slightly different part of the big picture of computing technologies.

Bitcoin? Seems Like Bitcon…

I’ve never been a fan of cryptocurrencies. A few months ago, I wrote about some of their problems. However, since I’m a techie, people always expect me to be paying attention to the crypto market. So, in the fall, I “invested” about $750 in crypto. Like any good investor, I diversified across a variety of currencies. How did I fare? Very poorly.

As I write this, Coinbase lists the overall market as down 41% from this time last year. But surely there must be some winners? If so, they weren’t among the 12 coins that I purchased. Bitcoin is one of the worst, and is down 47%. I selected a few that I thought had actual promise because of their utility – Fetch.ai and Internet Computer. Both of them performed even more poorly and are down by more than 60%.

Risks happen in any market, so this should not be a surprise. However, cryptocurrency advocates have insisted that these currencies would protect against inflation and be immune to geopolitical events. Currently, we’re seeing the highest inflation in 40 years and are closer to WWIII than we’ve ever been with Russia’s invasion of Ukraine. As such, cryptocurrencies have not lived up to their hype.

Now, investment firms are considering allowing cryptocurrencies to be included in retirement accounts. Why? Because crypto’s supporters must be seeing the true nature of its Ponzi scheme: they finally recognize that more investors are needed to inflate the value of their crypto “investments.”

Every day, the evidence becomes clearer and clearer – it should have been named Bitcon from the beginning…

Software Project Billing Models

Regardless of the type of work done by a service organization, two payment models exist: project-based and hourly. Each model works better for certain kinds of projects, and each has pros and cons for both the buyer and the service provider.

Service Billing

For simple services that can be estimated accurately, a project-based rate makes sense. Examples include the cost of a car wash, painting a room, or changing the oil in a vehicle. In each of these instances, the work is nearly identical every time the service is performed. Furthermore, the time required is either constant (changing the oil in a car) or easily measured from a parameter such as room size (painting a room).
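
As a rough sketch of that kind of parameter-based quote (with entirely made-up numbers for the flat fee and per-square-foot rate):

```typescript
// Project-based quote for painting a room, priced from one parameter:
// the wall area. The fee and rate below are hypothetical placeholders.
function paintingQuote(wallSquareFeet: number): number {
  const setupFee = 150;        // flat fee for prep and materials
  const ratePerSquareFoot = 2; // labor plus paint, per square foot
  return setupFee + wallSquareFeet * ratePerSquareFoot;
}

// A 12' x 12' room with 8' ceilings has roughly 384 sq ft of wall.
console.log(paintingQuote(384)); // 918
```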

Other types of projects are more difficult to estimate accurately. For example, gutting and remodeling a bathroom may require an estimate that depends heavily on how the project progresses. As the contractor moves through the work, unforeseen issues may arise, such as electrical wiring problems or rotten floorboards that could not be known before the work began.

Software Development Models

While many customers may want a project-based price, the reality is that such estimates are often inaccurate. Much like the wiring issues or rotten floorboards found by a contractor, unforeseen issues often arise in software development. Furthermore, since every project is unique, developers are often forced to provide what is really nothing more than an educated guess at the timeframe.

Most software companies have acknowledged this reality as they have moved from “waterfall” to “agile” development methodologies. In waterfall, timeframes and budgets are defined before development begins. However, companies found that these plans were rarely accurate; budget overruns and late deliveries were common.

To solve this problem, companies moved to “agile” development, in which smaller pieces of work are completed and deployed again and again to build an application iteratively. The customer begins using the software as soon as possible and can change course as needed. For example, a customer may find that an “essential feature” is really not important once other parts of the project are delivered. Or they may discover that a missing feature blocks required functionality.

Software Billing

Given that software quotes can be highly inaccurate, software companies and clients are left to determine how best to bill for software services. In a project-based model, the development firm takes on all the risk of the estimate being wrong. Knowing that estimates are frequently inaccurate, the firm will likely pad the estimate considerably to account for those issues. Furthermore, the firm has a vested interest in doing the least amount of work that accomplishes the client’s vision. This may result in poor, unmaintainable code or buggy implementations, as well as the client effectively paying a higher overall hourly rate.

Conversely, if the software company bills hourly, it may have less incentive to work efficiently and may instead run up the clock to bill more hours. While the client can expect better-quality code, their bill may be inflated.

Since both models can be exploited by software development firms, it is important to find a developer you trust. Ultimately, I prefer an hourly model because it allows me to change course as the client’s needs change. I have found that customers rarely have an accurate idea of what they want at the outset. However, as the project progresses and their vision comes into focus, clients are able to provide meaningful direction. If I’m forced to provide a project-based price, that feedback must largely be ignored, since the changes would constitute a change in project scope – something a project-based model does not allow.

Conclusion

As a customer, an understanding of both models can help you interact with a software provider. While your first thought may be that the software company is trying to run up the bill with extra hours, any decent developer can provide you a ballpark figure for your project. However, know that such an estimate may be subject to change based on unforeseen problems as well as changes in your requirements as the project moves forward.

The Technology Ride

The world has gone through some amazing transformations during the last half-century. In the early ’80s, computers were a rarity and cell phones were a novelty for wealthy business executives. During the ’90s, all that changed with the release of Windows 95, a pivotal point in the history of technology: for the first time, computers were easy enough for the average home user. A decade later, Apple would introduce the iPhone, followed by Google’s Android platform, changing the face of technology again. Today, computers and cell phones are ubiquitous.

My Experiences

Born in the late ’70s, I have been able to witness this transformation firsthand. I have also had the incredible opportunity to take part in creating technology myself. My career began in the US Army in 1995, where I served as a member of the Intelligence Community. I learned to use and administer SunOS and Solaris machines and began experimenting with programming. It was in this environment that I developed a love for Unix-based systems that continues to this day.

On those Unix machines, I started programming in C, C++, Tcl, Perl, and Bourne shell. While my first programs were pretty bad, I would eventually have the opportunity to write code for a classified government project. That code earned me a Joint Service Achievement Medal and made a profound impact within the intelligence community at the time.

After leaving the Army, I entered the civilian workforce to develop Point-of-Sale applications using C++. I would spend nearly two decades developing code for a variety of companies using various platforms and languages. I developed low-level code for phone systems, created custom Android operating systems, and programmed countless web and mobile applications.

Today

Now, I run my own business developing software for clients and creating artificial intelligence solutions. But when I look back, I am always amazed at how far the technology revolution has brought us, and I am thankful that I have had a chance to be a part of that revolution! With 20 years left before I retire, I can’t imagine where technology will take us tomorrow. Yet, I can’t imagine not being a part of that future!

For those born in the ’90s or the new millennium, you will never know how much the world has changed. But my generation watched it happen – and many of us played a part in making it happen!

Overview of CompTIA Certifications

A variety of computer certifications exist today, and they fall into one of two categories – vendor-neutral or vendor-specific. In the vendor-neutral category, CompTIA is the industry leader. Best known for its A+ certification, CompTIA has been around for 40 years and has certified over 2.2 million people.

Today, CompTIA issues over a dozen IT certifications covering everything from computer hardware to project management. Beyond single certifications, CompTIA also offers what it calls ‘Stackable Certifications’, which are earned by completing multiple CompTIA certifications. For example, earning both the A+ and Network+ certifications results in the CompTIA IT Operations Specialist certification.

Hardware Certifications

Individuals who want to work in computer hardware maintenance and repair should start with the A+ certification. This exam covers basic computer hardware and Windows administration tasks. For anyone wanting to work with computers, it covers the fundamental knowledge required for success.

Once you have mastered computer hardware, the next step is computer networks, covered by CompTIA’s Network+ certification. Topics in this exam include wireless and wired network configuration, knowledge of switches and routers, and other subjects required for network administration. Note that this exam is vendor-neutral; knowledge of specific routers (such as Cisco) is not required.

Security Certifications

CompTIA offers a variety of security certifications for those who wish to secure their networks or test network security. The first exam in this category is Security+, which covers the basics of security, including encryption, Wi-Fi configuration, certificates, firewalls, and other security topics.

Next, CompTIA offers a variety of more in-depth security exams on topics such as penetration testing (PenTest+), cybersecurity analysis (CySA+), and advanced security issues (CASP+). Each of these exams picks up where Security+ ends and requires far more extensive security knowledge. With so many security issues in the news, these certifications are in high demand among employers.

Infrastructure Certifications

CompTIA offers several tests in what it calls the ‘infrastructure’ category. These exams are particularly useful for people who administer cloud systems or manage servers. Certifications in this category include Cloud+, Server+, and Linux+. If your organization utilizes cloud-based platforms, such as AWS or Google Cloud Platform, these certifications provide a vendor-neutral starting point. However, if you really want to dive deep into topics like AWS, Amazon offers numerous exams specifically covering their platform.

Project Management Certification

While not hardware-related, CompTIA offers an entry-level project management certification called Project+. This exam is less detailed and less time-consuming than other project management certifications but still covers the essentials of project management.

Conclusion

For the aspiring techie or the individual looking to advance their career, CompTIA provides a number of useful certifications. While certifications from other vendors may cost thousands of dollars, CompTIA exams are generally under $400. This is money well spent in the competitive IT world as CompTIA is one of the most respected names in vendor-neutral IT certifications.

Apple vs Android – A Developer’s Perspective

While most applications are developed for both iPhone and Android, new developers face a choice of which platform to learn first. Both ecosystems offer excellent apps and a wide variety of devices, but they differ considerably from a developer’s perspective.

Android Development

For the novice, Android development is probably the easier entry point. For starters, low-end Android phones are cheaper to purchase than iPhones. More importantly, Android developers can use Windows, Linux, or Mac machines for development. So, if you have a computer and an Android phone, you can get started right away.

Android applications are written in Java or Kotlin. While Kotlin is the newer language, more learning resources are available for Java. Furthermore, once you learn Java, other development opportunities open up to you – such as backend services built with frameworks like Spring Boot.

Once you have learned how to program Android phones, you will find that other devices use Android as well, including Virtual Reality hardware such as Oculus, Augmented Reality glasses from vendors like Vuzix, and smartwatches.

Publishing to Google Play is relatively simple too. Once you pay a one-time fee, you are a licensed developer and can create and deploy applications to the Google Play store. While there is some oversight from Google, it is less burdensome than Apple’s requirements.

iPhone Development

iPhone development is a little more complicated. For starters, you will need a Mac, as the tools for iPhone development do not run under Windows or Linux. Furthermore, both Apple computers and iPhones tend to be more expensive, making a small development setup costlier.

While Android’s Java language is used everywhere, the iPhone’s Swift language is far more limited and rarely appears outside the Apple ecosystem. So, if you choose to develop other services to integrate with your app, you will need to learn an additional language.

Unlike Android, few devices run iOS. Thus, your iPhone development skills will not translate to other devices, aside from the Apple Watch.

Finally, Apple’s App Store is far more expensive and burdensome than the Google Play Store. For starters, Apple requires developers to pay an annual license fee, which costs more than Google’s one-time fee. Furthermore, the App Store is much stricter about app requirements and provides significantly more oversight of its marketplace.

Conclusion

While I think both the Apple and Android phones are excellent, I personally find the Android developer experience to be more positive. This is particularly true for the indie developer or individual looking to learn mobile development.