code-for-a-living September 2, 2020

If everyone hates it, why is OOP still so widely spread?

OOP has been wildly successful. But was the success just a coincidence? And can it still offer something unique in 2020 that other programming paradigms cannot?
Medi Madelen Gwosdz
Content Strategist

In the August 1981 issue of Byte magazine, David Robson opens his article 'Object-Oriented Software Systems' (which became many programmers' introduction to the subject) by admitting up front that it is a departure from what those familiar with imperative, top-down programming are used to.

“Many people who have no idea how a computer works find the idea of object-oriented programming quite natural. In contrast, many people who have experience with computers initially think there is something strange about object oriented systems.”

It is fair to say that, generations later, the idea of organising your code into larger, meaningful objects that model the parts of your problem continues to puzzle programmers. For those used to top-down programming or functional programming, which treats elements of code as precise mathematical functions, it takes some getting used to. After an initial hype period promised improvements in modularising and organising large codebases, the idea was overapplied. With OOP followed by OOA (object-oriented analysis) and OOD (object-oriented design), it soon felt like everything you did in software had to be broken down into objects and their relationships to each other. Then the critics arrived on the scene, some of them quite disappointed.

Some claimed that OOP makes tests harder to write and refactoring riskier. There is also the overhead of reusing code, which Joe Armstrong, the creator of Erlang, famously described as the case where you wanted a banana but got a gorilla holding the banana. Everything comes with an implicit, inescapable environment.

Others describe this way of solving problems by analogy: the imperative programmer is "a cook or a chemist, following recipes and formulas to achieve a desired result," while the object-oriented programmer is "a Greek philosopher or 19th century naturalist concerned with the proper taxonomy and description of the creatures and places of the programming world."

Was the success just a coincidence? 

OOP is still one of the dominant paradigms right now. But that might be due to the success of languages that happen to be object-oriented. Java, C++, and Kotlin dominate Android development, while Swift and Objective-C rule iOS, so you can't develop mobile software without understanding the object-oriented approach. For the web, it's JavaScript, Python, PHP, and Ruby.

Asking why so many widely used languages are OOP might be mixing up cause and effect. Richard Feldman argues in his talk that it might just be coincidence. C++ was developed in the early 1980s by Bjarne Stroustrup, initially as a set of extensions to the C programming language. Building on C, C++ added object orientation, but Feldman argues it became popular for its overall upgrade from C, including type safety and added support for automatic resource management, generic programming, and exception handling, among other features.

Then Java wanted to appeal to C++ programmers and doubled down on the OOP part. Ultimately, Sun Microsystems wanted to repeat the C++ trick by aiming for greatest familiarity for developers adopting Java. 

Millions of developers quickly moved to Java, due in part to its exclusive integration into web browsers at the time. Seen this way, OOP seems to have been hitching a ride rather than driving the success.

What can OOP do that is unique to it?

There are some valuable aspects to OOP, some of which keep it omnipresent even when it has its drawbacks. Let’s look at the cornerstones of OOP.

Encapsulation. This means that data is generally hidden from other parts of a program (placed in a capsule, if you will). OOP encapsulates data by default: objects contain both the data and the methods that affect that data, and good OOP practice means providing getter and setter methods to control access to that data. This protects mutable data from being changed willy-nilly, and makes application data safer.
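As a minimal sketch of the getter-and-setter style described above (the `BankAccount` class and its method names are invented for illustration, not taken from the article):

```python
class BankAccount:
    """Hypothetical example: the balance is reachable only through methods."""

    def __init__(self, balance=0):
        self._balance = balance  # leading underscore: internal by convention

    @property
    def balance(self):
        """Getter: read access without exposing the attribute for mutation."""
        return self._balance

    def deposit(self, amount):
        """Setter-style method that can enforce an invariant on the data."""
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount


account = BankAccount()
account.deposit(50)
print(account.balance)  # 50
```

Because every write goes through `deposit`, the object can reject invalid states instead of letting callers change the balance willy-nilly.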

Encapsulation is supposedly one of the greatest benefits of OOP. Yet even though it is most commonly associated with object-oriented programming, the concept itself is separate from it and can be implemented without objects. Abstraction is a complementary concept here: where encapsulation hides internal information, abstraction provides an easier-to-use public interface to data. In any case, encapsulation is not uniquely an OOP feature; it can be achieved with modules that isolate a system function, or a set of data and the operations on that data, behind a module boundary.
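One way to see encapsulation without objects is a closure, sketched here with an invented `make_counter` function: the state is hidden just as an object's private field would be.

```python
def make_counter():
    """A closure hides its state the way an object hides its fields."""
    count = 0

    def increment():
        nonlocal count
        count += 1
        return count

    return increment


counter = make_counter()
counter()  # 1
counter()  # 2
# 'count' is unreachable from outside: only increment() can touch it.
```

No class is involved, but the data is every bit as protected from outside mutation.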

Inheritance. Because objects can be created as subtypes of other objects, they can inherit variables and methods from those objects. This allows objects to support operations defined by their ancestor types without having to provide their own definition. The goal is to not repeat yourself: multiple copies of the same code are hard to maintain. But functional programming can also achieve DRY through reusable functions. The same goes for memory efficiency: even though inheritance contributes to it, so does the concept of closures in FP.
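A small, invented example of inheritance keeping code DRY: the subclasses below reuse `__init__` and `describe` from their parent without redefining them.

```python
class Animal:
    """Base class holding the shared state and behaviour."""

    def __init__(self, name):
        self.name = name

    def describe(self):
        # Relies on the subclass supplying sound().
        return f"{self.name} says {self.sound()}"


class Dog(Animal):
    def sound(self):
        return "woof"


class Cat(Animal):
    def sound(self):
        return "meow"


print(Dog("Rex").describe())  # Rex says woof
```

Neither `Dog` nor `Cat` repeats the constructor or the formatting logic; only the part that actually differs is written twice.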

While inheritance is an OOP-specific idea, some argue its benefits are better achieved through composition. Without inheritance, objects and methods quickly dissolve into the syntactic sugar for structs and procedures that they are. Note that inheritance also enables subtype polymorphism, which we discuss below.
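The composition alternative mentioned above could be sketched like this (the `Engine` and `Car` names are hypothetical): instead of `Car` *being* a kind of `Engine`, it *has* one and delegates to it.

```python
class Engine:
    def start(self):
        return "engine started"


class Car:
    """Composition: a Car has an Engine rather than being one."""

    def __init__(self, engine=None):
        # The dependency can be swapped without touching a class hierarchy.
        self.engine = engine or Engine()

    def start(self):
        return self.engine.start()  # delegate instead of inherit
```

Swapping in a different engine means passing a different object, not rewriting an inheritance tree.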

Polymorphism. Literally "many shapes," this concept lets a single interface, whether a generic, an interface type, or an ordinary method, operate on values of different types. There are many forms of polymorphism: a function can be overloaded to accept different argument types (ad-hoc polymorphism), or a call can dispatch to whichever class the receiving object belongs to (subtype polymorphism). Object-oriented programming tends to use a lot of subtype and ad-hoc polymorphism, but again, this is not a concept limited to OOP.
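A tiny, invented illustration of subtype polymorphism: the same `area()` call works on both shapes, and the caller never asks which class it is holding.

```python
class Circle:
    def __init__(self, r):
        self.r = r

    def area(self):
        return 3.14159 * self.r ** 2


class Square:
    def __init__(self, side):
        self.side = side

    def area(self):
        return self.side ** 2


# The same call works on different types.
shapes = [Circle(1), Square(2)]
print([round(s.area(), 2) for s in shapes])  # [3.14, 4]
```

In Python this works by duck typing; in statically typed OOP languages the same idea goes through a shared interface or base class.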

It seems that in 2020 there is not much OOP can do that other programming paradigms cannot, and a good programmer will combine strategies from multiple paradigms in the battle against complexity. For example, if you look at the tags that most often appear alongside questions tagged OOP versus those tagged functional programming, JavaScript pops up in both.

What’s to come?

OOP has, however, been wildly successful. It may be that this success is a consequence of a massive industry that supports and is supported by OOP.

So what about the developers themselves? Our Developer Survey this year shows that they are gaining more and more purchasing influence. And if we look at what developers prefer to work with, Haskell and Scala are among the most loved programming languages, with Scala commanding the second-highest salary. So maybe with more FP evangelism, these languages will climb the list of most popular languages, too.

There is some movement, though. Big companies like Twitter run their backend almost entirely on Scala, and Facebook has recently been applying Haskell. Many of the major OOP languages are also adopting functional features: .NET has LINQ, Java 8 introduced lambdas, and JavaScript is increasingly functional despite the introduction of classes in ES6. Swift may be the happy medium between an object-oriented and a functional language. So maybe there's no need to choose: you can have your class Cake and EatCake() it too.
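The blending of paradigms described above shows up even in Python, a language usually taught through classes; this is an invented snippet, not one from the article, showing a functional-style pipeline of the kind LINQ and Java lambdas enable:

```python
# Functional-style pipeline: no classes, just function composition.
numbers = [1, 2, 3, 4, 5]
evens_squared = list(map(lambda n: n * n,
                         filter(lambda n: n % 2 == 0, numbers)))
print(evens_squared)  # [4, 16]
```

The same programmer might wrap this pipeline in a method on a class five lines later; in practice the paradigms coexist in one codebase.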


Special thanks to Ryan, whose great insights and edits helped with this post.

