The recent Pegasus spyware hack has "opened our eyes", or at least most people are pretending it has! However, if most people were really serious about privacy being an ethical right of each individual, their actions would be very different from what they are today.
Firstly, many things have changed between the early 80s/90s and now in terms of how the software development process itself is perceived and managed. And since software engineering is a new and rapidly evolving science (in the overall timeline of humanity), it's quite wrong to assume that every aspect of present-day processes is objectively better than those of the past.
For one, the extraordinary emphasis on users constantly updating their apps and operating system software is quite recent. Ironically, people hardly updated their software with such frequency in the 90s or even the early 2000s, and still managed to keep their systems far more secure, at least going by the number of hacking incidents in the public domain. Of course, one reason for this is the massive increase in the number of cyber-criminals trying to compromise people's security; the attack ecosystem has evolved a lot in recent years.
But on the other hand, what is the defense ecosystem doing to counter that? Constantly releasing "security updates" and constantly asking users to update their apps isn't the best way to approach this problem. Security shouldn't be an afterthought; it should be built into the project right from the start, and one way to do that is to reduce complexity and feature creep. All software must be designed to be robust and secure, with security updates or patches released only when a genuine vulnerability is found (such as the infamous OpenSSL vulnerability).
Complexity is highly antithetical to privacy and security. The more complex a piece of software's design, the harder it is to test it for vulnerabilities or even audit its code. One way to reduce complexity is to keep components separate or decoupled, even at the cost of performance: processing power is cheap, but a breach of security isn't. In this regard, the move from sysvinit to systemd was an extremely bad design decision, as the latter's "black box" approach of high complexity demands far more effort from software auditors and testers checking for vulnerabilities than the former did. I'm not saying sysvinit didn't need an upgrade; it certainly did. But systemd was the wrong way to go about it. The further you move from simple to complex, the greater the chance that some shrewd hacker is sitting on a zero-day vulnerability you aren't even aware of.
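To make the auditability argument concrete, here is a minimal sketch of a SysV-style init script for a hypothetical service called "mydaemon" (the service name and paths are invented for illustration). The entire start/stop logic is plain shell that an auditor can read end to end:

```shell
#!/bin/sh
# Minimal SysV-style init script for a hypothetical "mydaemon" service.
# Everything the init system does for this service is visible right here,
# in a couple of dozen lines of portable shell.

DAEMON=/usr/sbin/mydaemon
PIDFILE=/var/run/mydaemon.pid

case "$1" in
    start)
        echo "Starting mydaemon"
        # start-stop-daemon is the Debian helper; other systems use
        # a plain background invocation or their own helper instead.
        start-stop-daemon --start --pidfile "$PIDFILE" --exec "$DAEMON"
        ;;
    stop)
        echo "Stopping mydaemon"
        start-stop-daemon --stop --pidfile "$PIDFILE"
        ;;
    restart)
        "$0" stop
        "$0" start
        ;;
    *)
        echo "Usage: $0 {start|stop|restart}" >&2
        exit 1
        ;;
esac
```

The equivalent systemd unit would be a handful of declarative [Unit]/[Service] lines, which looks simpler on the surface; but working out what those directives actually do means auditing systemd's own large C codebase rather than twenty lines of shell.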
Older Windows versions like XP and 7 didn't require constant updates the way Windows 10 does. The same could be said about older vs. newer versions of Ubuntu, Fedora, etc. And Android is an absolute mess when it comes to software design! While AOSP is open source, the actual vendors like Samsung, Xiaomi, Oppo, etc. ship their own Android versions which are proprietary and closed source. They don't even release updates for their software regularly, and when they do, they are known to break earlier features and introduce even more bugs! If only Android had followed a simple design like that of Windows or even a Linux distro, it would be much more secure today.
Going forward, it's up to the stakeholders of the software defense ecosystem (FOSS developers, testers and auditors, designers, sponsors and advocacy companies like Red Hat, etc.) to design their systems to be transparent and keep them simple rather than complex. Of course, as the number of features increases, some amount of complexity is bound to be introduced; it's in the nature of a user to keep asking for more and more unneeded features. But as far as possible, a developer should only implement features to the extent that he/she can keep them secure and simple.