
Second-System Syndrome in Software

Successful V1s tempt teams into bloated V2s. From Netscape 6 and Vista to Copland, Perl 6/Raku, PHP 6, Python 3, Angular 2, Evernote 10, Skype 2017, and Snapchat 2018—what changed, what broke (tech, product, org), and a checklist to dodge the second-system trap.

Lessons from History and Modern Cases


Fred Brooks coined the term “second-system effect” to describe how a small, elegant first system is often followed by an over-engineered, bloated second system. Once developers have a successful Version 1, they become overconfident and pack Version 2 with every feature they wished they had before. As the early operating-systems veteran Victor A. Vyssotsky put it, “It was the familiar second system syndrome. You put in everything you wished you’d had in the other one.” Inevitably, this leads to complexity, delayed schedules, and quality issues. Below, we examine several examples, from famous failures like Netscape 6 and Windows Vista to more recent, under-the-radar incidents, and draw out what went wrong in each case. Each example highlights what version 1 did well, what changed in version 2, the nature of the issues (technical, product, organizational), and the consequences of the second system.

Netscape 6: A Rewrite that Sunk a Browser Giant

Version 1 Success (Netscape Navigator 1–4)

Netscape’s original web browser (Navigator 1.0 through 4.x) was a pioneering product of the 1990s web boom. It was fast-moving, feature-rich for its time, and quickly gained the majority of browser market share by innovating on HTML, JavaScript, and more. The team’s focus on rapid shipping (releasing Netscape 1.0 in just six months) helped it beat competitors. An early team member noted they were “shipping a finished product in six months or [would] die trying”. This intense focus kept Version 1 lean and effective under tight deadlines.

What Changed in Version 2 (Netscape 6)

After Navigator 4, Netscape decided to rewrite the browser from scratch for a new “Mozilla” platform (skipping a planned version 5). This was an ambitious do-over aimed at a more robust, standards-compliant architecture. However, the rewrite took three years, an eternity in Internet time. During this time (1998–2000), Netscape’s engineers kept adding grand new ideas into the in-progress browser. They threw in an entirely new layout engine, a new UI, and lots of redesigns instead of incremental improvements. As Joel Spolsky famously summarized, “They did it by making the single worst strategic mistake that any software company can make: They decided to rewrite the code from scratch.” All the while, they stopped improving the old Navigator 4 codebase. Version 2 (branded Netscape 6.0, released in late 2000) was launched with great expectations, but was immediately met with disappointment.

Nature of the Issues

The problems were primarily technical and product-focused. The complete rewrite meant Netscape 6 was slow, memory-heavy, and buggy compared to the polished Navigator 4. Many features that users relied on in version 4 were absent or half-baked in the new version. In Spolsky’s words, Netscape 6 astonished people with “how few features it had” after such a long wait. The over-engineering (multiple subsystems redesigned at once) led to poor performance and instability. Organizationally, the company’s decision to halt work on the stable branch left them with nothing to ship for years. This gap gave Internet Explorer a free runway to dominate. (In fact, some Netscape veterans later admitted “it’s never a good idea to start over and rewrite it”, reflecting on how the V2 effort went astray.)

Consequences

Netscape’s market share plummeted during the long delay. By the time Netscape 6 finally arrived, users had moved on; most had switched to IE, which was steadily improving. The second-system effort effectively killed Netscape as a commercial browser: AOL, which had acquired the company during the long rewrite, eventually discontinued it. An open-source offshoot (Mozilla) salvaged the technology and later produced Firefox, but Netscape’s own brand never recovered. The key lesson for developers and PMs is stark: a “big bang” rewrite that tries to do everything can be fatal. Netscape’s first system succeeded by being timely and focusing on user needs, whereas the second system collapsed under its overambition.


Windows Vista (Longhorn): Ambition Overload After a Beloved XP

Version 1 Success (Windows XP/2000)

By the early 2000s, Microsoft’s Windows 2000 and Windows XP (built on the stable Windows NT core) were successful, stable operating systems. Windows XP in particular (released 2001) was praised for its balance of performance and features, strong hardware support, and improved user interface. It became a rock-solid baseline that enterprises and consumers embraced for many years. In short, the first system (Windows NT/2000/XP generation) had evolved cautiously and delivered a reliable OS.

What Changed in Version 2 (Windows “Longhorn” ➡️ Vista)

For the next major release (codenamed Longhorn, eventually Windows Vista in 2006), Microsoft swung for the fences. Vista’s plan introduced massive architectural changes: a new graphical UI subsystem (Avalon/WPF) with 3D effects, a new storage subsystem (WinFS) layering a database over the file system, a new communication stack (WCF), and a shift toward managed code (C#/.NET) within core parts of the OS. In essence, the Windows team tried to redesign large swaths of the system in one go, a prime example of version-two overreach. Over the first few years of Longhorn development, these teams built a huge amount of code for all the ambitious features, with few clear delivery deadlines and poor scope control. Without realizing it, the project dug itself into “an incredibly deep hole” of complexity. By 2004, three years in, none of the major new components were near finished, and the OS was sluggish and unstable: the new features were layered on top of the old, adding huge performance overhead. Microsoft was forced to make a dramatic course correction. They “reset” the project in 2004, throwing out much of the WinFS/Avalon integration. Vista eventually shipped in late 2006 with many of those grand features either absent or cut down.

Nature of the Issues

The Vista saga combined technical overreach with product missteps and organizational churn. Technically, the second-system approach of piling major new subsystems onto a mature OS led to bloated performance and endless bugs. The additive nature of the changes (new managed code layers on top of old native code) meant Vista was slower than XP on the same hardware. The ambitious components (like the revolutionary WinFS storage) proved too complex to deliver on time, which is a classic over-engineering failure. On the product side, Vista also pushed aggressive UI changes (Aero transparency, User Account Control prompts) that, while well-intentioned (e.g., improving security), annoyed users and felt like bloat. Organizationally, the project suffered from feature creep and a lack of focus until the 2004 reset imposed new discipline. Even after the reset, the pressure to ship meant delivering a system that wasn’t what had been originally envisioned. The resulting Vista was a compromised product: big under-the-hood changes with too little tangible user benefit to justify the trouble.

Consequences

Windows Vista’s launch was met with a mixed-to-negative reception. Many users found Vista slow, resource-hungry, and plagued by compatibility problems (drivers that worked on XP initially didn’t work on Vista). It garnered a reputation as one of Microsoft’s biggest missteps, to the point that enterprises skipped deploying Vista and waited for the next version. Internally, the fallout from Vista’s second-system syndrome had lasting impacts. It took Windows 7 (in 2009) to regain user trust by fixing Vista’s issues while dialing back complexity. Some of Vista’s behind-the-scenes improvements (e.g., better security architecture) did benefit Windows long-term, but the debacle also contributed to Microsoft’s loss of momentum during the crucial mid-2000s (when mobile and web were rising). The Vista case teaches that a successful system (XP) sets a high bar; if version 2 tries to do too much at once, it can collapse under its own weight, requiring painful resets.


Apple Copland: Feature Creep and Project Chaos in a Next-Gen OS

Version 1 Success (Classic Mac OS up to System 7)

Through the 1980s and early ’90s, Apple’s classic Mac OS (System 6, System 7) provided a user-friendly graphical interface that was highly regarded. However, by System 7 (1991–95), it was aging under the hood, lacking modern OS capabilities like protected memory and proper multitasking. Still, System 7 was stable and familiar, and it kept Apple’s Macintosh line competitive through the early ’90s, even outshining Windows 95 in some UX aspects.

What Changed in Version 2 (Copland, the planned Mac OS 8)

Apple’s response to the impending era of modern OSes was an ambitious project code-named Copland. Announced in 1994, Copland was intended as a top-to-bottom rewrite of Mac OS with modern architecture (PowerPC-native, protected memory, preemptive multitasking, a flashy new UI theme system, etc.). In true second-system fashion, Apple marketed Copland as everything a Mac user could dream of: a leapfrog over Windows. But as the project rolled on, it began to slip behind schedule repeatedly. Instead of pruning features to ship on time, Apple management kept adding more to Copland to justify the delays. This led to rampant feature creep. For example, even after an initial developer beta in late 1995, new capabilities and UI ideas were continuously piled in, pushing out the timeline. Despite the massive effort, the project was unstable and nowhere near ready. Apple’s SVP David Nagel, who had promised Copland by mid-1996, left the company as it became clear that the deadline would be missed. Copland was the victim of too many ideas and no realistic focus on finishing a shippable OS. In August 1996, after several painful slips and an attempt to salvage it by shipping incremental pieces, Apple canceled Copland outright.

Nature of the Issues

Copland’s failure was technical, product, and organizational all at once. Technically, building a modern OS from scratch is extremely hard. The team struggled with fundamental low-level problems while also implementing flashy new UI concepts. The constant influx of new features made the codebase a moving target, unstable and unwieldy. This is a prime example of the second-system effect: instead of a conservative approach, the project attempted too broad a revolution in one step. From a product perspective, Apple misjudged what was achievable; they marketed capabilities (like an advanced UI and a fully new architecture) that they then couldn’t deliver, damaging credibility. Organizationally, project management was dysfunctional. Milestones slipped, and there was no ruthless gatekeeper to cut scope. One senior engineer concluded Copland “could never ship,” which proved true. In short, nearly every classic second-system mistake (feature creep, lack of discipline, poor schedule management) befell this attempt.

Consequences

The Copland fiasco nearly sank Apple. By late 1996, the company faced a serious OS vacuum: System 7 was outdated, Copland was dead, and Windows 95 was eroding the Mac’s edge. Apple’s eventual solution was radical: it bought NeXT (Steve Jobs’s company) in 1997 to get NeXTSTEP, a stable modern OS, as the basis for what became Mac OS X. The fallout included large financial losses (Apple posted a $740M loss in 1996 amidst Copland’s turmoil) and a hit to Apple’s reputation in the mid-90s. Yet the silver lining was that Copland’s failure forced Apple to bring back Steve Jobs and adopt a more mature OS core, moves that ultimately saved the company. The lesson? Feature creep is deadly. A second system must be tightly managed; otherwise, as with Copland, it can spiral into a mess “too expensive and unwieldy” to ever ship.


Multics (1960s): A Grand Second System That Inspired UNIX

Version 1 Success (CTSS and Simple Early OSes)

In the 1960s, MIT’s CTSS (Compatible Time-Sharing System) was one of the first operating systems to allow multiple users to interact with a computer simultaneously. CTSS was primitive by later standards, but it worked. It demonstrated the viability of time-sharing on mainframes, using relatively straightforward mechanisms. Similarly, other first-generation OS efforts at places like Bell Labs (e.g., BESYS) were limited in scope but met immediate needs. These first systems taught important lessons and provided a base of experience.

What Changed in Version 2 (Multics)

The follow-on project was Multics (Multiplexed Information and Computing Service), a collaboration between MIT, General Electric, and Bell Labs that began in 1965. Multics was envisioned as the OS to end all OSes: a multi-user, multi-processing system with dynamic linking, hierarchical file systems, security, and many other cutting-edge features. Essentially, the Multics designers tried to build every conceivable improvement into the new system (far beyond CTSS). This bold vision exemplified the big-rewrite trap: Multics was vastly more ambitious and complex than its predecessors. As Victor Vyssotsky of Bell Labs later said, “We were naive about how hard it was going to be to create an operating system as ambitious as Multics.” The effort ran into difficulties: it proceeded slowly, ran over budget, and by the late 1960s still hadn’t delivered a stable, performant OS. Bell Labs grew frustrated, observing that Multics tried to incorporate all the “wishlist” features and had thus become unwieldy. In 1969, Bell Labs pulled out of the project entirely, essentially declaring Multics too late and too expensive to meet its needs. (GE and MIT continued, and Multics eventually became operational in the 1970s at some sites, but its impact was limited.)

Nature of the Issues

Multics exemplified technical over-engineering. The system architecture was ahead of its time. Many concepts in Multics were brilliant, but implementing them all at once was extraordinarily complex. The project had multiple stakeholders (academic vs corporate goals), which also complicated the scope. The technical challenges (dynamic memory segmentation, stringent security, and so on) meant slow progress; as schedules slipped, it became clear that the second system was too ambitious. This is the “everything but the kitchen sink” problem: Multics designers added everything they wished the first system had, rather than pruning. Organizationally, coordination between MIT, GE, and Bell Labs was difficult (differing objectives and cultures). In short, Multics suffered from classic second-system syndrome: too many features, too complex, all at once.

Consequences

While Multics itself had limited commercial success, its legacy was instructive. The frustration at Bell Labs directly led two researchers, Ken Thompson and Dennis Ritchie, to go off and create a simpler OS in 1969. That project became UNIX. UNIX was in many ways a reaction to Multics: a third system that deliberately backed off from the second-system bloat. UNIX took some ideas from Multics but implemented them in a far simpler way, and it became enormously successful. The Multics saga thus provided a cautionary tale: it proved the point Brooks drew from OS/360 in the same era, that second systems tend to overreach. For developers and PMs, the lesson is to temper ambitions for version 2. Multics did eventually run (and introduced seminal ideas in OS design), but as one retrospective noted, “Bell Labs management…came to believe the promises of Multics would be fulfilled only too late and too expensively.” The best ideas survived, but the project as a whole is remembered as a warning about over-ambition.


Perl 6 (Raku): “Rewrite Everything” and a Community Splintered

Version 1 Success (Perl 5)

Perl 5, released in the mid-1990s, was a highly successful scripting language, especially for system administration and CGI web scripts. It excelled at text processing and had a huge archive of modules (CPAN) that made it extremely useful. Perl 5’s philosophy of “There’s more than one way to do it” gave developers flexibility, and over a decade, Perl 5 gained a devoted community and evolved with regular, incremental updates. In short, Perl 5 was a workhorse first system: not perfect, but practical and battle-tested, powering countless scripts and web apps through the 1990s and 2000s.

What Changed in Version 2 (Perl 6)

Around 2000, Perl’s creator, Larry Wall, decided to undertake Perl 6, essentially a redesign of the language from scratch. Perl 6’s goals were extremely ambitious: to fix long-standing Perl 5 shortcomings, add many new features (a richer object system, built-in grammars for parsing, concurrency improvements, etc.), and even have an entirely new runtime. Wall himself quipped that the unofficial slogan of Perl 6 was “Second System Syndrome Done Right!”, an acknowledgment that they were indeed attempting a second-system overhaul, but hoping to avoid the usual pitfalls. In practice, Perl 6 took far longer than expected. The language went through a lengthy design process (“Apocalypses” and RFCs) and multiple implementations. One major change was the decision not to use Perl 5’s C-based interpreter at all, but to build a new virtual machine to run Perl 6 code. This proved a massive effort: over the years, several VM backends were tried (Parrot, the JVM, MoarVM), but each only partially delivered on performance and completeness. From 2000 to 2015, Perl 6 was in development with no stable release, an incredibly long gestation. Essentially, Perl 6 tried to “fix all the problems of the world” (as observed by John Siracusa) in the new design. This epitomizes the “big-bang” rewrite pitfall: the architects saw all of Perl 5’s warts and attempted to solve them in one grand new system. The first official Perl 6 release (the “Christmas” version) finally came in December 2015, 15 years later and at a point when many former Perl users had moved on.

Nature of the Issues

Perl 6’s saga was largely a technical/architectural second-system story, with community and organizational repercussions. The technical issue was scope explosion: Perl 5 was already complex, and Perl 6’s design was even more so. The decision to create a new runtime/VM was akin to a full rewrite of the engine, an extremely risky move. Indeed, as Siracusa noted, this was “part of the second-system syndrome… [they thought,] let’s come up with a virtual machine” rather than reuse the existing C runtime. The result was many false starts and a decade of incomplete or slow interpreters. Meanwhile, from a product/community perspective, Perl 6 diverged so much that it was essentially a different language (so much so that in 2019 it was officially renamed Raku to distinguish it from Perl 5). The long delay meant that by the time Perl 6 arrived, other languages (Python, Ruby, PHP, etc.) had captured the mindshare. Perl 5 itself stagnated for a while because attention was on Perl 6, causing a split in the community. It’s a classic case of the “big rewrite” trap: the new system took so long that the world changed in the interim.

Consequences

The fallout for Perl was significant. Perl 5, once one of the top dynamic languages, gradually declined in popularity, and the long wait for Perl 6 is often cited as a factor. When Perl 6 (Raku) finally became usable, it was indeed powerful: it has many innovative features and fixes many of Perl 5’s issues. However, as Siracusa lamented, “It’s just such a shame that so few people will find themselves with an opportunity to use it.” The second system arrived too late. Perl 6/Raku today has a niche following, while Perl 5 lives on, maintained by a dedicated core team, but the broader programming world has mostly moved to other ecosystems. The key lesson: be careful with a “throw-it-all-away and reinvent” strategy. The Perl 6 team had explicit awareness of second-system syndrome (even joking about it) and still fell into the trap of an over-ambitious rewrite that took years. Experienced developers can learn from this that sometimes evolving a system (even if imperfect) and keeping the community onboard is better than chasing perfection that might never ship.


PHP 6: The Phantom Second System that Never Shipped

Version 1 Success (PHP 5 and earlier)

PHP is a scripting language that rose to dominance, powering web pages (especially in the LAMP stack). By the mid-2000s, PHP 5 (released in 2004) was well-established, introducing decent object-oriented features and solid performance for web development. PHP 5’s success lay in its practicality. It was easy to deploy, fairly fast, and had a huge ecosystem (WordPress, etc.). However, it had design flaws and limitations (inconsistent functions, no unified Unicode handling, etc.) owing to its organic growth.

What Changed in Version 2 (PHP 6 attempt)

After PHP 5, the core team embarked on PHP 6 as the next major release (around 2005). The flagship goal for PHP 6 was to finally introduce native Unicode support throughout the language, a major undertaking since PHP historically treated strings as byte sequences. The plan was to use the ICU library and represent strings internally in UTF-16, requiring significant changes to PHP’s engine and extensions. This was a classic second-system goal: fix a fundamental weakness (lack of Unicode) by redesigning large parts of the core. Unfortunately, the project encountered heavy technical headwinds. Implementing Unicode everywhere proved complex, and a “shortage of developers who understood the necessary changes” made progress slow. Worse, the prototype showed serious performance problems converting between UTF-16 and other encodings (most web text wasn’t stored as UTF-16). Over several years, PHP 6 dragged on with delays. The team began back-porting other non-Unicode improvements into minor PHP 5 releases (so as not to hold up useful features). By 2010, PHP 6 in its envisioned form was abandoned without an official release. In effect, PHP 6 died as a second system that collapsed under its core ambition. The next major release to reach users was PHP 7 in 2015, which deliberately skipped the “6” moniker due to the notoriety of the failed project.
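PHP 6 never shipped, so there is no PHP 6 code to show, but the underlying problem is easy to illustrate in any language that distinguishes bytes from text. Here is a minimal Python sketch of the byte-versus-character mismatch PHP 6 set out to fix, and of the encode/decode boundary where its prototype lost performance:

```python
# Text as bytes vs. text as characters: the mismatch PHP 6 tried to fix.
raw = "naïve".encode("utf-8")   # UTF-8 bytes: 'ï' encodes to two bytes
text = raw.decode("utf-8")      # decoded back into a proper Unicode string

print(len(raw))    # 6 -- byte count, roughly what PHP 5's strlen() reported
print(len(text))   # 5 -- character count, what users actually expect

# PHP 6 planned to hold every string internally as UTF-16, so every I/O
# boundary would pay an encode/decode round trip like the calls above --
# the conversion overhead where the prototype's performance collapsed.
```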

Nature of the Issues

The PHP 6 saga was predominantly a technical architecture failure. The goal (universal Unicode) was arguably ahead of its time for a C-based, performance-sensitive language like PHP. It required rewriting large parts of how strings and memory worked. The team underestimated the complexity: only a few people had the expertise, and progress was slow. As delays mounted, the team showed organizational pragmatism: rather than completely halting progress, they cherry-picked the features they could salvage (namespaces, traits, etc.) and put those into PHP 5.3/5.4. This left PHP 6 as an ever-receding goal for Unicode that was never met.

The second-system effect here was scope creep in the engine: trying to solve an important but extremely invasive problem in one leap, without a staged plan or enough resources. Product-wise, the impact was contained since no official PHP 6 hit the public, but in the community there was confusion (books were even published about “PHP 6” based on dev versions that never became a reality). Eventually, the decision was made to cut losses.

Consequences

PHP’s core team regrouped and focused on a more attainable win: performance. The next major version released was PHP 7, which brought dramatic speed improvements (thanks to a separate initiative, PHPNG) and cleaned up some legacy issues, but without attempting the full Unicode integration. By naming it 7, they signaled a break from the PHP 6 saga. PHP 7 was very successful, indicating that the community could move forward once the second-system attempt was dropped. Meanwhile, true Unicode handling in PHP was deferred (even today, PHP’s Unicode story isn’t as simple as Python’s or Java’s, relying on extensions like mbstring). The lesson here for engineers is that not every ambitious idea can be delivered in one step; sometimes the “second system” fails so badly it never ships at all. PHP 6 shows the wisdom in Brooks’ advice: build one to throw away. In this case, the “one” thrown away was the multi-year PHP 6 effort, and the eventual success (PHP 7) came by refocusing on achievable improvements rather than an all-encompassing redesign.


Python 3: A Risky (But Ultimately Successful) Overhaul

Version 1 Success (Python 2.x)

Python 2 (especially 2.6/2.7 in the late 2000s) was a widely used, beloved programming language known for its simplicity and large ecosystem of libraries. By 2008, Python had become a staple in web development, scripting, and more, thanks to its clean syntax and “batteries-included” philosophy. However, Python 2 had accumulated some misfeatures (the muddled divide between Unicode and byte strings, old-style classes, and more). It was a successful system, but with some growing pains as usage expanded.

What Changed in Version 2 (Python 3)

The Python core developers decided to make Python 3 a major, backwards-incompatible release to clean up the language. Key changes included making Unicode the default for all strings (similar impetus as PHP’s attempt), removing legacy quirks, and a few syntax changes (e.g. print became a function instead of a statement). This was a bold second-system move: they intentionally broke compatibility with Python 2 to “fix warts” and future-proof the language. The expectation (or hope) was that the community would rapidly migrate to Python 3 since it was the future. Python 3.0 was released in late 2008, but over the next few years, adoption was shockingly slow. Many developers treated it as an academic exercise while continuing with Python 2. In hindsight, the Python team “assumed that everyone would make the big switchover immediately” and thus felt it was acceptable to introduce breaking changes. In reality, those changes provided relatively modest immediate benefits to users but imposed a lot of porting work. The incompatibility meant that libraries had to be rewritten or at least 2to3-modernized, and for a long time, many critical packages (especially in scientific computing and web frameworks) were Python 2-only. The result was a near-decade of fragmentation: two Pythons coexisted, with Python 2.7 not reaching end-of-life until 2020. This delay far exceeded what the core devs expected. Essentially, Python 3, as a second system, went through a long period of rejection by its user base due to the costs it imposed.
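The flavor of the break is easy to show. The snippet below is valid Python 3 only, with the corresponding Python 2 behavior noted in comments; each change is small on its own, but together they meant almost no non-trivial program ran unmodified on both versions:

```python
# Python 3 code; comments note what Python 2 did instead.

print("hello")          # Py2: `print "hello"` was a statement, not a function

text = "café"           # Py3: string literals are Unicode by default
data = text.encode("utf-8")      # explicit str -> bytes conversion
assert isinstance(data, bytes)   # Py2 blurred this line with its byte-based str

print(5 / 2)            # 2.5 in Py3; Py2 silently truncated this to 2
print(5 // 2)           # 2 -- floor division now needs the // operator

nums = range(5)         # Py3: a lazy range object; Py2 returned a real list
print(list(nums))       # [0, 1, 2, 3, 4]
```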

Nature of the Issues

Python 3’s story is a mix of technical and product/community issues. Technically, Python 3 was successful in implementing its intended improvements (Unicode by default, cleaner syntax, etc.). Unlike some second systems, it wasn’t a buggy mess; the interpreter worked fine. The “syndrome” manifested more in the strategic and community realm. The core team’s decision not to support backward compatibility was a strategic gamble: they introduced many small (and a few large) breaking changes all at once. Users and library maintainers initially saw Python 3 as offering few killer features. One joke was that Python 3 was “the version where print got parentheses”. In other words, from the user’s perspective, it felt like a “relatively mild improvement on Python 2” in exchange for a lot of breakage. This perception meant the community was reluctant to invest in migrating code. The nature of the issue was partly organizational: no amount of top-down push could force the ecosystem’s volunteers to port libraries overnight. So while Python 3 wasn’t an over-engineered technical flop, it was a case of over-optimistic planning, a kind of second-system effect where the team thought a clean break would be easier than it was. The team ended up having to undertake a years-long campaign to drive adoption (providing tools, writing guides, and eventually sunsetting Python 2).

Consequences

Initially, Python 3’s second-system “pause” allowed rival languages (like Ruby, JavaScript, etc.) to gain ground as the Python community’s growth slowed. However, in the long run, this story ended more positively than most second-system cases: by around 2019, Python 3.x became the norm, and virtually all actively maintained libraries were made compatible. The language has since surged in popularity (driven by data science uses, etc.), vindicating many of the improvements. But it was a tough road: it took about 11 years (2008–2019) for Python 3 to decisively supersede Python 2. The experience yielded lessons for language designers and product managers: if you break compatibility, you must offer compelling benefits and be prepared for a long transition. The Python core team perhaps underestimated that, thinking people would jump on “better Unicode support” readily, but “many people didn’t switch…for what they perceived to be mostly an inconvenience.” In sum, Python 3 was a second system that survived its syndrome, but not without a protracted sophomore slump. It teaches that even a technically sound second system can falter if the migration path isn’t carefully managed.


Angular 2: Framework Rewrite Alienates Its Own Users

Version 1 Success (AngularJS 1.x)

AngularJS (initial release circa 2010) was a popular JavaScript front-end framework that introduced developers to two-way data binding, MVC structure on the client side, and a templating system. AngularJS 1.x succeeded because it made building single-page web applications more approachable at a time when jQuery was the dominant tool. It handled things like DOM updates automatically and had an active community. By 2014, AngularJS had a large user base (enterprise apps, startups, many Angular 1.x apps in production). It was not perfect; there were complaints about performance and architecture (difficulty in modularizing, memory leaks, etc.). But it was a proven tool that lots of teams were invested in.

What Changed in Version 2 (Angular 2+)

The team at Google decided to create Angular 2 as a complete rewrite of the framework. This was effectively treating Angular 2 as a brand-new platform, incompatible with AngularJS. They introduced a new architecture based on components, made TypeScript the standard language, adopted reactive programming patterns (RxJS), and dropped many AngularJS concepts. The mantra was “Angular 2 is not just an update, it’s a different framework.” One article noted, “From Angular to Angular 2, (almost) everything has changed.” Indeed, the Angular 2 release in 2016 left existing AngularJS users with no easy upgrade path: you essentially had to rewrite your application to move to Angular 2. This became a cautionary tale of version-two disruption. The authors took the opportunity of a new version to implement all the architectural changes they had dreamed of (better performance via a new change-detection system, a proper module system, etc.), but in doing so, they abandoned the simplicity and continuity that attracted users. Early in Angular 2’s life, the framework itself was in flux (multiple release candidates with breaking changes, a completely revamped router that went through several iterations, etc.). Developers found the learning curve steep and the migration effort enormous.

Nature of the Issues

The issues were both technical and product-oriented. Technically, Angular 2 was well-engineered in many respects (it was faster and more modular than AngularJS), but it was incompatible and much more complex to set up. The initial AngularJS had a gentle learning curve for simple apps (just include one script and you could start writing AngularJS in HTML). Angular 2 required a build step, TypeScript knowledge, and understanding of a whole new ecosystem (transpilers, modules, etc.), which in 2016 was a lot to ask of teams that had comfortably built apps with AngularJS. In effect, the Angular team optimized for the long-term technical purity of the framework while sacrificing immediate user-friendliness, a classic second-system trade-off. From a product standpoint, the biggest issue was community disruption: thousands of AngularJS 1.x applications suddenly faced a dead end or a costly rewrite. Many developers were upset that Angular 2 forced them to throw away well-working code. The Google team did provide a migration approach (the ngUpgrade library, which runs AngularJS and Angular side by side), but it was complex. The net effect was that some portion of the community decided to jump ship to other frameworks (React or Vue, which were gaining momentum). Angular 2 also changed the programming model significantly, which alienated some who found AngularJS’s two-way binding and controllers straightforward. There was definitely organizational boldness (or hubris) here: Google’s team believed a clean rewrite was needed, even knowing it would upset users. This is reminiscent of platform vendors who assume they know best and that users will eventually come around. Sometimes that’s true; sometimes it isn’t.

Consequences

In the short term, Angular’s popularity took a hit. Circa 2016–2017, surveys showed stagnation or decline in Angular usage, while React (which emphasized stability and gradual evolution) surged ahead. Angular 2 (and later 4, 5, and beyond, since the project rebranded to just “Angular” with semantic versioning) did gradually mature and find its footing. Today, Angular (v10+) is a major enterprise front-end platform. But the second-system approach created a split: AngularJS 1.x continued to be used and maintained for years by those who couldn’t migrate, while “Angular” (2+) forged a mostly new community. Google had to extend the end-of-life of AngularJS because so many projects were stuck on it. The episode offers a clear lesson: a complete rewrite of a popular framework is fraught with peril. The Angular team acknowledged that Angular 2 was a complete rewrite of Angular 1 and that developers would essentially have to relearn the framework. Many devs felt burned. It’s a caution that backwards compatibility and gradual migration paths are extremely valuable in software platforms. Angular’s second system eventually succeeded technically, but at the cost of community goodwill. Product managers should note that if you have a successful Version 1 with a community, radically changing course in Version 2 can create an opening for competitors.


Evernote 10: A Cross-Platform Rewrite that Frustrated Power Users

Version 1 Success (Evernote Legacy App)

Evernote, throughout the 2010s, built a loyal user base for its note-taking application. The “legacy” Evernote apps (versions 5–8, roughly) on Windows, Mac, iOS, etc., while not perfect, had rich features (powerful search, local notebooks, OCR, etc.) that appealed to power users. Each platform’s app was somewhat custom-tailored (the Windows app was a native Windows program, the Mac app a native macOS program, etc.), which allowed good performance and platform-specific integrations. Evernote succeeded by syncing data reliably and offering lots of advanced note organization features. It became an essential productivity tool for many with tens of thousands of notes.

What Changed in Version 2 (Evernote 10 Unified)

In 2020, Evernote undertook a major rebuild of its clients. Evernote 10 was a “second system” in the sense that they threw out their old native codebases and created a unified cross-platform app (built on Electron and a new stack). The goal was to have one codebase for all devices, enabling faster updates and a more uniform experience. However, this came at a heavy cost initially: Evernote 10 launched missing many features that existing users relied on (for example, certain search operators, PDF annotation tools, and AppleScript support on Mac were absent at launch). Moreover, users reported significantly worse performance: the new apps were slower to start and used more memory, likely due to the Electron overhead. The Evernote forums and Reddit were filled with complaints that the “new Evernote” was a big step backward. One long-time user and paying customer bluntly stated: “The current v10 is a disaster, especially the abysmal performance on iOS and the missing features on desktop versions that people relied on.” Many found the new UI design less efficient as well (more clicks for common tasks, etc.). Essentially, Evernote’s attempt to reinvent its app platform in one go led to a regression in user experience, a hallmark of the over-engineered second iteration, where the new system isn’t yet as refined as the old.

Nature of the Issues

The Evernote 10 case is a blend of technical and product failures. Technically, switching to a cross-platform Electron app simplified Evernote’s development on the engineering side, but it introduced performance issues (Electron apps can be heavy) and initially couldn’t do everything the old native apps did. The team likely underestimated the complexity of re-implementing a decade’s worth of features; the old Evernote had lots of small conveniences that were missing in the new one. This is a common second-system mistake: assuming you can rebuild “clean” and quickly catch up to all the nuanced functionality. Product-wise, Evernote 10’s launch strategy was problematic: they pushed the new apps to users before feature parity was achieved, which violated user trust. From an organizational perspective, Evernote’s leadership evidently believed a unified app would be better long-term, but perhaps didn’t anticipate the short-term backlash from its most loyal users (power users who knew the old apps inside out). The issues were largely product/UX in nature: the app was slower and, according to some user reviews, “shockingly bad” in parts of its interface. This frustrated the very users who championed Evernote.

Consequences

Evernote faced an exodus of some longtime customers in 2020–2021. Tech blogs and forums saw posts about users switching to alternatives (Notion, OneNote, etc.) because of the new Evernote’s shortcomings. Evernote had to scramble in subsequent updates to re-introduce missing features and improve performance. Over time, they did restore a lot of functionality, but the damage to their reputation was done. The company’s own CEO acknowledged they rebuilt core parts of the product and that initially “not everything is there yet.” In fact, they kept the “Evernote Legacy” application available for download for a while so unhappy users could revert, a telling sign. The Evernote 10 example underscores that a second-system rewrite can alienate your core users if it doesn’t at least match the old system. It offers a lesson in change management: experienced PMs know never to remove core features without a replacement. The organizational lesson is also about understanding your product’s complexity. The Evernote team likely had good reasons to modernize (the old codebases were hard to maintain), but the execution illustrates how easily a second system can end up slower and less functional, at least initially. As of 2023, Evernote is under new ownership, and its missteps with the v10 rewrite are often cited as a case study in how not to do a product overhaul.


Skype 2017: A Trendy Redesign That Forgot Its Core Users

Version 1 Success (Original Skype)

Skype became synonymous with internet voice and video calls in the 2000s. Its classic desktop app (and later mobile versions) allowed easy one-on-one or group calls and chats, and it succeeded by focusing on reliable call quality and a straightforward interface. By the time Microsoft acquired Skype in 2011, it had a massive user base. The “Skype classic” (version 7 and earlier) was essentially a communication utility (not flashy), but it was the go-to app for international calls, business meetings, and keeping in touch with family. Users valued that it just worked for calls and messaging, and many businesses even used it as an IRC-like chat hub.

What Changed in Version 2 (Skype Redesign 2017)

In 2017, Microsoft rolled out a dramatic Skype redesign (version 8), clearly inspired by the rising popularity of social apps like Snapchat. They added features such as “Skype Highlights” (a Stories-like ephemeral video feature), emoji reactions to messages, a colorful UI with customizable themes, and a redesigned interface that de-emphasized the traditional contact list and simple chat. Essentially, Skype tried to reposition itself as a trendy social network/messaging platform for a younger audience, rather than focusing on its strength in voice/video communication. This second-system change was not driven by a need to fix technical architecture, but rather a product strategy shift, which can be just as risky. The reaction from Skype’s user base was overwhelmingly negative. Users found the new interface confusing and cluttered with features “no one had asked for or needed”. Core features that heavy users valued (like easily seeing who is online, splitting chat windows, and an efficient UI for contacts) were hidden or removed. In chasing a new demographic, Skype alienated its existing one. Microsoft quickly faced backlash and even had to publicly acknowledge the missteps. Within a year, they announced an update removing or dialing back many of those new additions (goodbye, Highlights/Stories) and refocusing on simplicity.

Nature of the Issues

This was largely a product and UX sophomore slump. The Skype team fell victim to a form of feature creep, not in the engineering sense but in the product-design sense: loading the app with trendy features that weren’t aligned with Skype’s core use cases. Technically, the app still made calls, but usability dropped: important actions took more clicks, and performance suffered on some devices (the new version was heavier). Organizationally, one can speculate that there was pressure to reinvent Skype to compete with WhatsApp, Snapchat, etc., a strategic miscalculation of audience and competitive arena. The new features complicated the app, increasing cognitive load on users who just wanted to make calls or send simple messages. One could also call this a case of identity crisis: the second system forgot what made the first system great. Unlike the technical second-system cases, the implementation of the new features might have been fine; it was the product strategy itself that was flawed. When a mature product tries to be something it’s not (especially to chase a younger demographic), it can backfire spectacularly, as it did here.

Consequences

The 2017 Skype overhaul is now cited as a classic example of how not to do a redesign. The user backlash was intense; anecdotal evidence and Microsoft’s own admissions showed that virtually no one liked the new version. Skype’s app-store ratings plummeted, and countless reviews begged for the old UI. In response, Microsoft delayed the deprecation of “Skype Classic” (version 7) for a while because so many people refused to upgrade. By 2018, they publicly announced they were rolling back many changes and re-simplifying Skype’s interface. They removed the Highlights (Stories) feature entirely and toned down the color and frivolities, effectively admitting that the Snapchat-clone strategy had failed. Despite these corrections, Skype’s reputation took a hit. Around the same time, competing services like Zoom and Discord were gaining traction, and one could argue that Skype’s stumble opened the door wider for Zoom to become the default video-meeting app by 2020. The lesson for product managers is clear: don’t let second-system syndrome tempt you into bolting on every “hot” feature at the expense of your product’s core value. Skype’s users valued reliability and clarity; the second system delivered gimmicks and confusion. Microsoft learned the hard way to refocus Skype on what it does best, a lesson applicable to any mature product considering a big 2.0 revamp.


Snapchat Redesign 2018: When a UI Overhaul Loses Sight of the Users

Version 1 Success (Snapchat’s Original Story UI)

Snapchat built its massive user base in the mid-2010s by pioneering the Stories format (temporary 24-hour snaps) and a fast, playful interface for sending photos/videos among friends. The pre-2018 Snapchat interface was admittedly non-standard (swipe-based navigation with minimal labels), but core users — mostly teens and young adults — had grown accustomed to it. Importantly, content from friends (personal snaps and stories) was intermingled in the interface, and celebrity content or Discover media was somewhat less prominent. This formula was working. Snapchat had strong engagement and a devoted user base that liked how the app felt personal and friend-centric.

What Changed in Version 2 (2018 Redesign)

In late 2017 and early 2018, Snapchat rolled out a major redesign with the goal of separating friend content from media content. They moved friends’ Stories to the chat screen and pushed celebrity/influencer content and Discover media into a separate screen. The UI changes were sweeping: swiping behavior changed, stories and chats were algorithmically sorted, and the visual look was updated. Snapchat essentially delivered a new version of its app that it believed would be more approachable to new users and more lucrative (by showcasing Discover content for monetization). However, this backfired enormously. The redesign was extremely unpopular with Snapchat’s core users. In the App Store, over 83% of reviews for the update were negative (1 or 2 stars), an almost unprecedented backlash. Users found the new interface confusing, complained that it was harder to find friends’ content, and felt that Snapchat had lost its intimate feel by pushing celebrity content in their faces. Millions of users tried to revert to the old version via hacks, and a Change.org petition garnered over 1.2 million signatures demanding the old Snapchat back. Even celebrities voiced displeasure: a famous tweet by Kylie Jenner in February 2018 said she wasn’t using Snapchat anymore, which coincided with a drop in Snap’s stock price. In short, the second-system UI was a disaster with the user base.

Nature of the Issues

This was a product/user-experience second-system failure. The Snapchat team appears to have designed the new UI to address certain business goals (make Discover content more prominent to drive ad revenue, and perhaps simplify the app for new users confused by the old design). However, in doing so, they betrayed the expectations of their existing users, a cardinal sin of product design. The new system was over-engineered in a UX sense: it tried to algorithmically sort and separate content in a way that broke the organic workflow users had developed. It’s akin to feature creep, but instead of adding more, they rearranged everything (which can be just as bad). Technically, the app still functioned, but the information architecture was drastically changed, and not in a user-friendly way. The issue was not one of code or performance, but of user psychology and habit. Snapchat underestimated how much its users valued the old interaction model. Organizationally, it seems Snap pushed this change top-down without adequate testing or heed to user feedback. This kind of scenario is reminiscent of the second-system effect, where designers and execs convince themselves the new approach is superior, ignoring the successful elements of the first system that users actually love.

Consequences

The immediate consequence was a hit to Snapchat’s usage metrics. In the quarter following the redesign, Snap’s user growth stalled and even declined slightly, a rare occurrence for a hot social app. The company had to respond quickly. By mid-2018, Snapchat issued updates that walked back some of the unpopular changes: for example, putting friends’ stories back on the same page as chats (essentially undoing the separation that users hated). They also added more customization to placate users. While Snapchat did survive this blunder (its user count eventually recovered and grew again, especially after 2019), the episode stands as a warning. It showed that even a trendy app can suffer a sustained user revolt if it loses sight of UX fundamentals. As TechCrunch noted, Snapchat’s confusing rearrangement “sparked backlash” and left users “asking to uninstall”. For product managers, the Snapchat saga underscores the importance of user testing and gradual change. A UI that “seems like a good idea” in theory can devastate engagement if it ignores why current users came in the first place. In summary, Snapchat’s second-system redesign was a sophomore slump: an attempt to iterate on a winning formula that nearly derailed the product, proving that more features or new layouts are not always better, especially when they conflict with core user values.


Key Takeaways and Lessons for Teams

From these case studies, spanning operating systems, programming languages, and consumer and SaaS applications, a few clear lessons emerge for experienced developers and product managers:

Remember What Made Version 1 Good

In each story, the “first system” had qualities that users valued (be it Netscape’s timely innovation, XP’s stability, Perl 5’s practicality, or Snapchat’s friend-focused UX). The second system faltered when teams lost sight of those core virtues. A successful V1 often succeeds by doing a few things well; a V2 should enhance those strengths, not bury them.

Avoid Over-Ambition: Scope Control is Crucial

A common thread is the feature and scope explosion. Netscape and Copland tried to rewrite everything from scratch (and more), Vista tried to ship three major subsystems at once, Perl 6 aimed to be a perfect language, etc. The result was delay and complexity. The discipline to simplify plans for V2, even deferring “nice-to-have” ideas to V3 or later, can make the difference between a manageable project and a death march. As Fred Brooks warned, the second system is the most dangerous one. Successful teams either resist that temptation or strictly manage it (e.g., Python 3 suffered for years but eventually succeeded by slowly phasing in changes and phasing out the old system).

Maintain Continuity and Backwards Compatibility if Possible

Not every product can be backwards-compatible, but completely breaking from a popular first system carries a huge risk. Angular’s experience shows how a rewrite can fracture a community. Python 3 shows that even with good reasons to break compatibility, it can stall adoption for years. Providing migration paths, deprecation periods, or at least preserving key workflows can mitigate second-system pains. When Microsoft backtracked on Skype and Snapchat reverted parts of its UI, they were essentially restoring familiar elements that never should have been dropped.
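One inexpensive way to honor that lesson, sketched here in Python with hypothetical names (old_search, search), is to keep the old entry point alive as a thin deprecation shim for a release cycle or two rather than deleting it outright:

```python
import warnings

def search(query, limit=10):
    """The new, preferred entry point."""
    return _index_lookup(query)[:limit]

def old_search(query):
    """Deprecated V1 entry point, kept as a shim so existing callers still work."""
    warnings.warn(
        "old_search() is deprecated; use search() instead",
        DeprecationWarning,
        stacklevel=2,
    )
    return search(query)

def _index_lookup(query):
    # Stand-in for the real lookup logic.
    return [f"result for {query!r}"]

print(old_search("second system"))  # works, but nudges callers toward search()
```

Old code keeps running, users get a clear signal and a migration window, and the team can measure how many callers remain before removing the shim for good.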

User-Centric Design, Not Developer-Centric Wishlists

The “version two” trap often originates from the developers’ or architects’ desires more than users’ needs. Multics and OS/360 loaded up on features that engineers found theoretically appealing. Snapchat’s redesign seemed driven by business goals and a design team vision rather than user demand. The lesson is to stay grounded in solving user problems. Avoid adding features or changes just because “we can” or because they impressed us in brainstorming. Guard against “gold-plating” the new system with capabilities that sound cool but don’t improve the end-user experience proportionately (or worse, harm it).

Incremental Evolution vs. Big Bang Rewrite

Joel Spolsky’s famous admonition “never rewrite from scratch” may be a bit extreme (sometimes a rewrite is necessary), but these cases show the wisdom in exhausting evolutionary approaches first. Netscape might have fared better by enhancing Navigator 4 incrementally. Perhaps Evernote could have modularized parts of its app gradually instead of one large switch. An incremental approach forces you to prioritize the most important improvements and keeps you honest. You have to deliver value continuously. If a rewrite is truly needed, doing it in phases (as Chrome did with multi-process architecture, or Microsoft did by introducing .NET gradually alongside Win32 before a complete OS overhaul) can avoid the “nothing to ship for years” trap.
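What “in phases” can look like in code is worth making concrete. Below is a minimal Python sketch (all names hypothetical) of the strangler-fig approach: a thin router sends already-ported features to the rewritten engine while everything else keeps running on the legacy path, so there is always something shippable:

```python
# Strangler-fig routing: port features one at a time while the legacy
# implementation keeps serving everything not yet migrated.

MIGRATED = {"search", "export"}   # grows release by release as the rewrite catches up

def handle(feature, request):
    if feature in MIGRATED:
        return new_engine(feature, request)
    return legacy_engine(feature, request)   # the safe default

def new_engine(feature, request):
    return f"new[{feature}]: {request}"

def legacy_engine(feature, request):
    return f"legacy[{feature}]: {request}"

print(handle("search", "q=vista"))   # served by the rewrite
print(handle("notes", "id=42"))      # still served by the old code
```

Each release moves a feature into MIGRATED only after it reaches parity, which is precisely the discipline the big-bang rewrites above never imposed on themselves.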

Test Reality, Get Feedback Early

Many of these second systems suffered from a bubble where the team forged ahead for a long time before reality intervened (often very late, as with Vista’s 2004 reset or Apple canceling Copland). Incorporating reality checks, prototypes, user testing, and beta programs can surface problems early. The extreme negative feedback to Snapchat’s redesign suggests they hadn’t tested enough with regular users. Building a second system in isolation (with either internal enthusiasm or top-down pressure overshadowing external input) is dangerous. Early feedback might have prompted course corrections or a rollback before a full launch fiasco.

In sum, second-system syndrome teaches us to be humble after success. It’s the tendency for version 2 to be bloated due to inflated expectations. The antidote is a combination of user focus, disciplined project management, willingness to cut features, and respect for the value in the simplicity of the first system. Version 2 can absolutely be better than version 1, but only if we avoid the temptations that led the examples above astray. Each failure, from Netscape’s lost browser to Copland’s vaporware OS, became a cautionary tale that seasoned engineers still cite, precisely to help the next generation not repeat history.


Enjoyed this piece?

If this piece was helpful or resonated with you, you can support my work by buying me a Coffee!
