Internet privacy and security course

Chapter 111

Open and closed source code. Errors and situational bugs.

Open and closed source code is often perceived by users as black and white, good and evil, safe and unsafe. That is a misconception, so let's take a look at things as they really are. This article, or rather this series of articles, was inspired by a question from a Panic Button user: "Doesn't closed source code contradict the security standards you preach in this course?"

Any application consists of code written by programmers. The code is usually stored on a dedicated service such as GitHub or GitLab, or on any cloud service running in a company's corporate environment. All code starts out open; with so-called closed source applications, however, it is open only to a restricted circle of people, usually the developers. The code of open source applications is open to anyone.

Here’s a small piece of Panic Button’s code.

[Image: a fragment of Panic Button's source code]

As you can see, code by itself is just a string of letters, numbers and symbols; it can't be run by an operating system directly. The source code has to be transformed into an executable file by a compiler: on Windows this is an exe, on Android an apk, on macOS a dmg. This process is called "compilation".
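To make the idea concrete, here is a minimal sketch, not related to Panic Button itself: a trivial C++ source file and, in the comments, a hypothetical compiler invocation that turns it into an executable.

```cpp
// hello.cpp - on its own this is just text that the OS cannot execute.
#include <iostream>

int main() {
    std::cout << "Hello from a compiled executable\n";
    return 0;
}

// A compiler turns the text above into a binary the operating system
// can run, for example (hypothetical command line, MinGW g++ on Windows):
//   g++ hello.cpp -o hello.exe
// The resulting hello.exe is the kind of file users of a closed source
// application receive; with open source they also get hello.cpp.
```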

Open source code allows any programmer to independently compile an executable file from the source code and, of course, to read the code itself. If the source code is closed, a user gets only the executable file.

This doesn't mean you can't learn anything about such files: you can inspect their network requests and their activity in the system, and, finally, you can decompile them, that is, reconstruct source code from an executable file. We'll talk about that in a separate part of this series. So far it looks like open source is a great thing while closed source is bad. Let's accept this as true for now and go through the threats and issues an application may carry.

The threats an application can carry:
- Errors
- Situational bugs (unforeseen responses)
- Vulnerabilities
- Backdoors
- Hidden features

Errors

Programmers write code – tens of thousands, even hundreds of thousands of lines – and upload it, for instance, to GitHub, to a private repository that can be accessed only by the developers working on the project. Code uploaded by one programmer is usually reviewed by another, who is expected to judge whether it is well written and, where necessary, point out the errors found in the process.

Some projects use additional tools to find errors in code, for instance PVS-Studio. Such tools catch many errors, though of course not all of them. Errors happen to everyone, from a freelance developer to an international giant like Microsoft.
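For illustration only (this snippet is invented, not taken from any real project), here is the kind of copy-paste mistake that analyzers such as PVS-Studio routinely flag: the condition compares the x coordinates twice and never checks y, yet the code compiles without complaint.

```cpp
#include <iostream>

struct Point {
    double x;
    double y;
};

// Intended to check that two points are equal.
bool samePoint(const Point& a, const Point& b) {
    // Bug: the second comparison should be a.y == b.y.
    // A human reviewer can easily miss this; static analyzers look for
    // exactly this pattern of identical sub-expressions.
    return a.x == b.x && a.x == b.x;
}

int main() {
    Point p{1.0, 2.0};
    Point q{1.0, 5.0};
    // Prints 1 (true) even though the points differ in y.
    std::cout << samePoint(p, q) << "\n";
    return 0;
}
```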
What can errors lead to? For instance, an application may fail to run properly. If it's an antivirus, it may be unable to delete a Trojan it has detected or, conversely, work too diligently and delete your system files. An application can also simply pester the user with error messages or crash constantly, forcing you to restart it over and over again.

During my university years, the popular antivirus AVG, after a routine update, mistakenly identified the important system file user32.dll as a Trojan and quarantined it. My computer, with my term paper on it, simply stopped working, and it took me a lot of effort to find the cause and restore Windows. The vendor later issued an official statement admitting it had overlooked an error that caused such unpleasant consequences for users.

Just recently, Microsoft experienced a similar mishap: a regular Windows 10 update meant to improve the system's stability mistakenly deleted user files. It wiped files from the "My Documents" folder – the very place where I used to keep my term papers...

Most errors can be found and fixed at the testing stage; some, however, reproduce only when a user performs a nonstandard action or under exceptional conditions, for instance when particular software or drivers are installed on the user's device. I call this kind of error a "situational bug".

Situational bugs

Usually a project head or product owner designs the architecture, programmers write the code, and so-called testers test the product. Testing typically relies on virtual machines with popular operating systems deployed in them; the testing team tries to reproduce different situations and checks that every function behaves as expected. Some of these tests are automated, while others can only be run manually.
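To make the split between automated and manual testing concrete, here is a tiny hedged sketch (the helper function and paths are invented for this example, not taken from Panic Button): a pure function can be verified automatically on every build, while behaviour that depends on a real, cluttered machine has to be checked by hand.

```cpp
#include <cassert>
#include <string>

// Hypothetical helper an application like this might contain:
// builds the path to a browser history database inside a profile folder.
std::string historyDbPath(const std::string& profileDir) {
    return profileDir + "/History";
}

int main() {
    // Automated check: cheap, runs in any virtual machine on every build.
    assert(historyDbPath("C:/Users/test/Chrome/Default")
           == "C:/Users/test/Chrome/Default/History");

    // What this cannot cover: pressing the real panic hotkey on a slow,
    // software-laden laptop and watching the result - that remains a
    // manual test, and skipping it is how situational bugs slip through.
    return 0;
}
```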

Only after a full test pass and a green light from the testing team does the application go to release, in other words, become available to users.

But not all situations can be reproduced: a user may have unique settings, system quirks, their own set of software, an enabled firewall – it is impossible to even imagine every situation, so errors will always occur.

While developing Panic Button we also ran into situational bugs, which stalled the application's release for a couple of weeks. We were confidently heading toward release, had passed our tests on virtual machines and PCs with flying colors, and apart from a couple of details the application looked really solid – and then I decided to run it on an old home laptop I kept in a closet.

I installed the application and checked panic activation via a hotkey combination, data clearing, file deletion and notification sending – everything worked properly. Then I set a logic bomb with the default settings. However, when I rebooted the laptop, the system, burdened with installed software, took so long to start that I simply failed to deactivate the logic bomb in time.

Why did we miss it at the testing stage? Virtual machines aren't burdened with extra software that runs at startup, and modern computers are powerful enough and well optimized to load an operating system quickly, but the old laptop took two minutes to load the operating system and all the applications set to auto-run at startup.
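Here is a rough sketch of how such a timing bug can arise (my own simplification, not Panic Button's actual logic; all numbers are assumptions): if the deactivation window is counted from system boot rather than from the moment the application is ready, a two-minute boot can eat the whole window before the user even sees the desktop.

```cpp
#include <chrono>
#include <iostream>

// Hypothetical, simplified model of a "logic bomb" countdown.
int main() {
    using namespace std::chrono;

    const seconds deactivationWindow{120}; // assumed default window after boot
    const seconds bootTimeFastPc{20};      // VM / modern PC used in testing
    const seconds bootTimeOldLaptop{120};  // old laptop with heavy auto-run

    for (seconds boot : {bootTimeFastPc, bootTimeOldLaptop}) {
        // Time the user actually has once the desktop becomes usable.
        const seconds left = deactivationWindow - boot;
        std::cout << "Boot took " << boot.count() << "s, user has "
                  << (left.count() > 0 ? left.count() : 0)
                  << "s to deactivate\n";
    }
    // On the fast machine the user has 100 s left; on the old laptop, none -
    // exactly the situational bug that never shows up in a clean VM.
    return 0;
}
```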

Outdated browsers proved to be another issue we ran into. When activated, Panic Button was supposed to delete the browser history, saved tabs and passwords, cookies, cache and other sensitive information. There are a lot of browsers on the market, and our product works with the most popular ones: Mozilla Firefox, Chrome, Opera, Edge and Yandex.Browser.

During development and testing we naturally worked with the latest browser versions, and the testers installed the latest versions as well. But when Panic Button was launched on a computer whose browsers hadn't been updated for half a year, not all of its features worked properly.

This is due to the changes browsers accumulated over that time. For instance, newer Chromium builds introduced new SQLite3 files that store session data, while a couple of old files disappeared: they were probably split into smaller databases. Recent Firefox versions keep the configuration file profiles.ini in the AppData/Roaming directory, and if you don't delete it together with the rest of the profile, the browser won't start.
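To show why such changes bite, here is a hedged sketch (my own, not Panic Button's code; the profile path and file names are only examples of a layout that varies between browsers and versions) of wiping browser data by deleting a hard-coded list of files:

```cpp
#include <filesystem>
#include <iostream>
#include <vector>

namespace fs = std::filesystem;

int main() {
    // Example paths only; real profile locations and file names differ
    // by browser, by user and - crucially - by browser version.
    const fs::path chromeProfile =
        "C:/Users/alice/AppData/Local/Google/Chrome/User Data/Default";
    const std::vector<fs::path> filesToWipe = {
        chromeProfile / "History",   // SQLite3 database with visited URLs
        chromeProfile / "Cookies",   // moved/renamed in newer Chromium builds
        chromeProfile / "Sessions"   // session data split across new files over time
    };

    for (const fs::path& p : filesToWipe) {
        std::error_code ec;
        // remove_all handles both single files and directories; a hard-coded
        // list like this silently misses files added by newer browser
        // versions - exactly the situational bug described above.
        const auto removed = fs::remove_all(p, ec);
        std::cout << p << ": removed " << removed
                  << (ec ? " (error: " + ec.message() + ")" : "") << "\n";
    }
    return 0;
}
```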

As you can see, whether the source code is open or closed has little influence on preventing bugs, including situational ones. What does affect how well they are detected is the number and skill of the testers on the team and the size of the user base: if an application boasts millions of users, the chance of covering different situations and finding the issues described above is higher than for an application with a smaller audience.

The next part of this chapter will cover backdoors and vulnerabilities. Bugs can be annoying, but backdoors and vulnerabilities pose a real threat to your devices and data.
