If everyone hates it, why is OOP still so widespread?
In the August 1981 edition of Byte magazine, David Robson opens his article, which for many readers served as their introduction to object-oriented software systems, by admitting up front that it is a departure from what programmers familiar with imperative, top-down programming are used to.
“Many people who have no idea how a computer works find the idea of object-oriented programming quite natural. In contrast, many people who have experience with computers initially think there is something strange about object oriented systems.”
It is fair to say that, generations later, the idea of organizing your code into larger, meaningful objects that model the parts of your problem continues to puzzle programmers. For those used to top-down programming or to functional programming, which treats elements of code as precise mathematical functions, it takes some getting used to. After an initial hype period promised improvements for modularising and organising large codebases, the idea was overapplied. With OOP being followed by OOA (object-oriented analysis) and OOD (object-oriented design), it soon felt like everything you did in software had to be broken down into objects and their relationships to each other. Then the critics arrived on the scene, some of them quite disappointed.
Some claimed that OOP makes tests harder to write and refactoring riskier. There is also the overhead of reusing code, which Joe Armstrong, the creator of Erlang, famously described as wanting a banana but getting a gorilla holding the banana: everything comes with an implicit, inescapable environment.
Other ways of describing this new way of solving problems include the analogy between the imperative programmer as “a cook or a chemist, following recipes and formulas to achieve a desired result” and the object-oriented programmer as “a Greek philosopher or 19th century naturalist concerned with the proper taxonomy and description of the creatures and places of the programming world.”
Was the success just a coincidence?
Asking why so many widely-used languages are OOP might be mixing up cause and effect. Richard Feldman argues in his talk that it might just be coincidence. C++ was developed in the early 1980s by Bjarne Stroustrup, initially as a set of extensions to the C programming language. Building on C, C++ added object orientation, but Feldman argues it became popular for the overall upgrade from C, including type safety and added support for automatic resource management, generic programming, and exception handling, among other features.
Then Java doubled down on the OOP part to appeal to C++ programmers: Sun Microsystems wanted to repeat the C++ trick by aiming for maximum familiarity for developers adopting Java.
Millions of developers quickly moved to Java due to its exclusive integration in web browsers at the time. Seen this way, OOP seems to just be hitching a ride, rather than driving the success.
What can OOP do that is unique to it?
There are some valuable aspects to OOP, some of which keep it omnipresent even when it has its drawbacks. Let’s look at the cornerstones of OOP.
Encapsulation. This means that data is generally hidden from other parts of a program—placed in a capsule, if you will. OOP encapsulates data by default; objects contain both the data and the methods that affect that data, and good OOP practice means you provide getter and setter methods to control access to that data. This protects mutable data from being changed willy-nilly, and makes application data safer.
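As an illustrative sketch (the class and method names here are invented for the example, not taken from any particular codebase), encapsulation with getters and setters might look like this:

```python
class BankAccount:
    """Encapsulates a balance together with the methods allowed to change it."""

    def __init__(self, balance=0):
        self._balance = balance  # leading underscore signals "private" by convention

    def get_balance(self):
        """Getter: controlled read access to the data."""
        return self._balance

    def deposit(self, amount):
        """Setter-style method: the only sanctioned way to mutate the balance."""
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount


account = BankAccount()
account.deposit(50)
print(account.get_balance())  # -> 50
```

Because every change funnels through `deposit`, invalid mutations can be rejected in one place instead of being scattered across the program.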
Encapsulation is often cited as one of the greatest benefits of OOP. Even though it is most commonly associated with object-oriented programming, the concept itself is in fact separate and can be implemented without objects. Abstraction is a complementary concept here: where encapsulation hides internal information, abstraction provides an easier-to-use public interface to the data. In any case, encapsulation is not uniquely an OOP feature; it can be achieved with modules that isolate a system function, or a set of data and the operations on that data, within a single module.
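To illustrate that encapsulation does not require objects, here is a hedged sketch using a closure (the function names are invented for the example):

```python
def make_counter():
    """A closure hides `count`; only the returned functions can touch it."""
    count = 0

    def increment():
        nonlocal count
        count += 1

    def value():
        return count

    # Only these two operations are exposed; `count` itself is unreachable.
    return increment, value


increment, value = make_counter()
increment()
increment()
print(value())  # -> 2
```

The hidden state is just as protected as a private field in a class, yet no object or class is involved.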
Inheritance. Because objects can be created as subtypes of other objects, they can inherit variables and methods from those objects. This allows objects to support operations defined by ancestor types without having to provide their own definition. The goal is to not repeat yourself: multiple copies of the same code are hard to maintain. But functional programming can also achieve DRY through reusable functions. The same goes for memory efficiency: inheritance contributes to it, but so do closures in FP.
While inheritance is an OOP-specific idea, some argue its benefits are better achieved through composition. If you take away inheritance, objects and methods quickly dissolve into the syntactic sugar for structs and procedures that they are. Note that inheritance is also what enables subtype polymorphism, which we discuss below.
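A minimal sketch contrasting the two approaches (the class names are invented for illustration): inheritance shares code through an ancestor type, while composition reuses it by holding a part.

```python
# Inheritance: Dog *is an* Animal and reuses the inherited describe() method.
class Animal:
    def describe(self):
        return f"I say {self.sound()}"

class Dog(Animal):
    def sound(self):
        return "woof"

# Composition: Robot *has a* Speaker; no ancestor type is involved.
class Speaker:
    def __init__(self, sound):
        self.sound = sound

class Robot:
    def __init__(self):
        self.speaker = Speaker("beep")

    def describe(self):
        return f"I say {self.speaker.sound}"

print(Dog().describe())    # -> I say woof
print(Robot().describe())  # -> I say beep
```

Both avoid repetition, but the composed version couples `Robot` only to the part it holds, not to a whole ancestor's interface.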
Polymorphism. Literally “many shapes,” this concept allows one name, such as a method or an interface, to take on many concrete forms. There are many kinds of polymorphism: a single function can be overloaded to accept different argument types, and a method defined on a base type can be overridden to adapt to each subclass. Object-oriented programming tends to lean heavily on subtyping polymorphism and ad-hoc polymorphism, but again, neither concept is limited to OOP.
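As a hedged illustration of subtyping polymorphism (the shape classes here are invented for the example), one piece of code can work unchanged with many concrete types:

```python
class Shape:
    def area(self):
        raise NotImplementedError

class Square(Shape):
    def __init__(self, side):
        self.side = side

    def area(self):
        return self.side * self.side

class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius

    def area(self):
        return 3.14159 * self.radius ** 2

def total_area(shapes):
    # Subtyping polymorphism: works for any Shape, present or future,
    # because each subtype supplies its own area() implementation.
    return sum(shape.area() for shape in shapes)

print(total_area([Square(2), Square(3)]))  # -> 13
```

Adding a new shape requires no change to `total_area`, which is the practical payoff of polymorphic dispatch.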
What’s to come?
OOP has, however, been wildly successful. It may be that this success is a consequence of a massive industry that supports and is supported by OOP.
So what about the developers themselves? Our Developer Survey this year shows that they are gaining more and more purchasing influence. And if we look at what developers prefer to work with, Haskell and Scala are among the most loved programming languages, and Scala commands the second-highest salary. So maybe, with more FP evangelism, functional languages will climb the list of most popular languages, too.
class Cake and
EatCake() it too.
Tags: functional programming, object oriented programming, oop
Special thanks to Ryan, whose great insights and edits helped with this post.
Good overview, but I would go farther in the conclusion. You say “It may be that this success is a consequence of a massive industry that supports and is supported by OOP.” which is true by virtue of it being tautological.
We can do better than a tautology to answer the question posed in the title of the piece by observing that *OOP style directly and deliberately maps well to the typical hierarchical organization of large corporations*. Why do we hate working in OOP? For the same reasons we hate working for inflexible, siloed, process-heavy, policy-heavy, bureaucratic, mutually-distrusting teams in corporations. Why is it a success? For the same reasons that those corporations are successful: organizations with those characteristics have historically been strongly correlated with creation of value that increases the wealth of the billionaire class, and therefore billionaires pay for the creation of those systems.
It should be no surprise that OOP style is popular in corporations; OOP is the reification in code of corporate thought patterns.
You make a good point: software is, generally, structured much like the companies that produce it. But is there anything corporate about OOP specifically, compared to other methods of encapsulation? Would a corporation structured around modules of functional code look any different from a corporation structured around objects?
OOP is like any other paradigm fad. OOP is a useful tool, and initially being OOP-savvy indicated that you were well-versed in the newest and best programming languages. People caught on to this, so OOP became a status symbol. Everyone wanted to show how OOP-savvy they were, and managers wanted their teams to be as OOP-savvy as possible. Like any status symbol, people seek it out a little bit more than they should, ignoring other perfectly good paradigms. As knowledge about other good paradigms becomes widespread again, the software field is gradually realizing that the best developer isn’t someone who’s maximally OOP, it’s someone fluent in multiple paradigms who can mix them to fit the situation.
Whether an organization is open-source or commercial, if you want to tackle big problems you need some way to split the problem into small steps. My problem with static, class-based, stateful OOP is that it was used as an alternative to closures (Smalltalk blocks; nested functions), and this produced code that was more complex than needed. The next problem is inheritance, which produces “white-box” code instead of the “black-box” code we like. The circle-ellipse problem is huge in static class-based OOP, but it’s not a problem in duck-typing OOP or in OOP not based on classes.

The biggest problem IMO comes from mutable state itself: whenever one method or procedure changes its own state it causes “action at a distance,” and to understand what one object or method is doing you have to understand the code of the other objects it talks to. One big example is the NullPointerException, where the order in which all objects initialize themselves matters. Mutable state in objects creates an implicit context and a dependency on global state that could have been modified by other objects. Even in games, which are supposed to be a good fit for OOP, if you change the state of one thing, like the position of an enemy, you have to remember to notify all the other objects or you get problems with coherence.

Instead of OOP I recommend relational (table-oriented) programming and reactive/constraint programming (automatically notify and update the properties of all objects when one component or object changes).
I think you missed the opportunity to add “I think” or “In my opinion” in almost every sentence you wrote.
I think we can safely assume that M. Lippert’s comment expresses what he thinks.
His thoughts carry more weight than most, so it doesn’t need to be said. That’s what being a “thought leader” is.
I would say that the influence of corporations is rather secondary. OOP is more intuitive than FP. You can teach a 5-year-old by utilizing concepts she already knows, like boxes with buttons and indicator lights. You can put them next to each other, you can put one or many boxes into another box, etc. This will not be full OOP by any means, but it will be the kind of OOP that most programmers use (e.g. to wrap/box imperative statements). A kid will be able to create an “architecture” you will find in much corporate software, one that follows an intuitive or even naive semantics of “the real world.” By contrast, FP is based on the semantics of pure mathematics, and mathematics is a prerequisite to master it even at a beginner’s level. So when we talk about the success of something among human beings, it is not about mathematical beauty and efficiency, but rather about saving energy and relieving stress as soon as possible.
I fully agree with you on that, Joseph.
I find this to be an interesting observation. When I learned to program the real world metaphor for me was pipes, valves and filters. Data flowed through functions like I imagined a fluid flowing through plumbing.
I think your statement may also be a good lead into the true challenge of OOP. Learning to model boxes with buttons and indicator lights is fun and seems natural. OOP at the enterprise level is not easy, fun or natural.
Not only is “the influence of corporations rather secondary”, but it is largely deleterious! Corporations really do not know how to make good software, nor do they care to know.
An extreme case of this is Project Managers following the “grab the mindshare” philosophy espoused by LinkedIn: they deliberately push features out the door before they are ready just to be there first, capturing mindshare. But they either do not know or do not care what this social trend does to lower both software quality and user expectations for software quality.
@Eugene – Thank you, some common sense at last.
Eric, what you said certainly needs to be pointed out, although it is true in general. Human “thought patterns” give rise to the same results in many situations.
If we don’t like the kinds of creatures that are successful in our econo-systems, we need to change what success means (unlikely), change the rules (difficult), or change how people function mentally, by teaching something beyond Concrete Thinking.
I don’t know how to accomplish that, but it is necessary.
You hit the nail on the head, I don’t need to say anything else. Great post.
Interesting that you address this tautology but present your own: “For the same reasons that those corporations are successful: organizations with those characteristics have historically been strongly correlated with creation of value”
That’s not a tautological statement.
so what’s that?
lot of butthurt corporate drones in here. Love your response!
If there is demand for Scala, it’s because there are legacy applications at companies that were written in the now dead, problematic language, and it’s difficult to find quality programmers who know the language and still want to write code in it.
Eric, I don’t know where I would be in a world without folks such as yourself. Well, I’d probably be viewed as a lot more intelligent and coherent 😉
Furthest I made it in IT was ‘Software Engineer’ and even that sounded too lofty for what I did day-to-day, but I had a vague grasp of the challenges of my colleagues, the architects, the DBAs (my boss was an All Star DBA turned Manager, he could easily read code, unfuck you on anything SQL, and was a god damn dream to work under).
With that said, after reading the article’s headline, I thought, “well, cuz management”. No one upvotes or gains insight from “well, cuz management”. Much love. <3
“Why is it a success? For the same reasons that those corporations are successful: organizations with those characteristics have historically been strongly correlated with creation of value that increases the wealth of the billionaire class, and therefore billionaires pay for the creation of those systems.”
You’re jumping to conclusions here without any explanation.
1. “organizations with those characteristics [inflexible, siloed, process-heavy, policy-heavy, bureaucratic, mutually-distrusting teams] have historically been strongly correlated with creation of value that increases the wealth of the billionaire class
=> inflexibility and bureaucracy increases the wealth? How is that? I always thought there are countless examples of big companies failing because of inflexibility.
2. “therefore billionaires pay for the creation of those systems”
=> So you think some billionaires decided to push object orientation at a time software developers were weird outsiders, and the software developers wouldn’t have used it otherwise?
Your comment is a bunch of buzzwords without connection, trying to trigger the populist reader.
You make several good points about OOP, but you missed what I think is the most important. This job is about understanding real-life problems and finding solutions for them through programming. The closest paradigm to a life made of objects is OOP, and in this regard modeling around objects comes out natural. Neither functional programming nor structured programming can achieve that, except for very specific kinds of problems.
I think this all depends on what types of problem you are trying to solve. If I am looking to find the number of widgets used by my company, by week, and correlate that to the amount of rain that fell, then I may be better off focusing on data and functions rather than trying to model widgets, widget containers, rain, rain containers, rain factories that can emit snow or rain or steam, etc.
You can’t make a problem simpler than it actually is. I like to respond to claims like this by quoting Mike Acton: “Reality is not a hack you’re forced to deal with to solve your abstract, theoretical problem. Reality is the actual problem.”
The billionaire class? Oh, like your previous boss Bill and your current boss Mark? Why the larpy, sneering condescension if you’ve obviously benefited greatly from the employment of this terrible class of people?
No condescension or sneering is expressed or intended. I don’t know what “larpy” means.
Your observation that I’ve made a comfortable living by increasing the wealth of the billionaire class is correct, though it is not clear to me why it is germane to the question of why OOP is successful in corporations. Let me take this opportunity to address your point though.
First, jobs which increase the wealth of the billionaire class are the considerable majority of jobs in America; I see no shame in doing well in a system that was created before I was born. Perhaps you have a job which does not increase the wealth of the billionaire class; that’s great. You go.
As a recent study from the Rand think-tank pointed out in stark terms, all Americans who are in the bottom 99% of incomes are missing approximately $50K A YEAR in income that has been transferred mostly to the billionaire class, with some comparatively small distributions to the top 1%. Let’s dig into that a little.
In a typical year my work increases revenues or decreases costs in excess of 4x my compensation, and that goes straight to the bottom line. Because I have numerous privileges and some negotiating power, I get a small fraction of my contribution to that bottom line, a considerable fraction of which is then confiscated in the form of taxes, which are then flown to Afghanistan and literally set on fire.
The vast majority of people working jobs where the surplus value of their labour is captured by billionaires have much weaker negotiating power than I do, and get a much smaller fraction than the few percentage points I get. Forgive me if this seems condescending — none is intended, but we know you see it where it is not intended — but your observation that I benefit by taking home for myself on the near order of 10% of the value I’ve added to the world is perhaps not the strong criticism you had in mind.
I would prefer that incomes be structured in the economy as they were prior to 1974, when rising tides really did lift all boats. Given that they are not, I’m doing the best I know how to ensure that what comparatively small wealth I have accumulated will be used to support the people and causes I care about throughout my lifetime and beyond.
If you have a solution for fixing income inequality in America, revitalizing the middle class, ending poverty, and reducing the political power of the billionaire class, I’m all ears. I do not have such a solution. I design programming languages.
Regardless of all that though, nothing in your sneering rhetorical questions has addressed the substance of my comment. Rather than making weird ad hominem attacks regarding whence I draw a paycheque and whether or not that is hypocritical, why not instead contribute by making a substantive critique? The argument I’ve advanced has numerous points that could be attacked on their merits.
More recently, speaking of billions worth of tax dollars sent to Afghanistan, the Taliban now have it all. And no one in the ruling class is going to be held accountable for it.
Oh, a happy goodwill american again….
I genuinely praise your kindness, thoughtfulness, and will for equity, but at the same time feel sorry watching an innocent STEM guy’s view of the world.
This reminds me of Conway’s Law:
organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations.
Why do we hate working with OOP? It’s simple: we don’t. Just some people hate it.
Exactly, and the people who hate it are far more likely to be vocal about it. If one day FP dominates 80% of the industry like OOP does today, we will hear endless complaints about FP as well.
I don’t think everyone hates it; I think they hate that it’s shoehorned into situations it shouldn’t be used for.
It’s obviously an amazing tool but forcing OOP onto everything is a bad idea. It’s a tool just like a hammer is a tool. You don’t drive screws with a hammer you drive nails with a hammer. You turn screws with a screwdriver.
Just use it where and when it makes sense and don’t use it when it doesn’t make sense.
Exactly! And even though a language might support OOP, almost every company that I’ve ever worked for has had lots of procedural code shoved into “objects” because they didn’t really understand the concept.
If there is a lot of bad OOP in the world it is because there are a lot of bad OOP coders in the world. Coders are like any technical profession: There are a small number of really good ones, a few ok ones, and a huge number of bad ones.
If you took OOP away from coders everywhere, you would just see a lot of bad code written using other models. Building robust, complex systems is a craft that takes time, skill, and careful thought.
OOP is neither good nor bad; it is the brain behind it.
So true, and so it follows that it is a HUGE mistake, and a common one, to believe that changing the programming language will solve any kind of problem you might have with your code base. If you got spaghetti you need to think about how you got it. It just ain’t in the programming language, and chances are there was no billionaire involved either.
You are certainly not wrong, but you better bring your galoshes to any high level vs low level, top down vs oop, vmacs vs why-not-just-have-a-fucking-spell-check-going-tho-Neo? You get the idea. People talking about the ‘efficiency of MUDs’ amidst talks of ‘ray tracing’ and terra flops. Like…. OKKKKKK nerds. Y’all need a haircut, mullets are showing, amirite?
With that said, neutrally, I think it affects everything. We can’t be candid about our philosophies. One person is doing it cuz they saw a movie called Hackerman in 1989, another is doing it because they don’t like they way tech has been made ‘exclusive’, and everyone has different feelings that get lost in Enterprise Software, and Front End Development i.e. Arcade Fire Band Member Roleplay and Back End Development i.e. showering in Cheetoh dust… ETC.
Everything is FUCKED by Hollywood, egos, disinformation.
Personally, I think some of the challenge comes from an understated prioritization of semantic consistency. Kinda like “everyone on the team uses Eclipse, so just use that, plz”… but like… across all of “IT”… whatever the fuck that means now.
tl;dr there are a number of ways to skin the cat, and we all have a way that we like best in our personal lives, and that we have to do in our professional lives. Sometimes they are at odds and we get salty about it. Neither is more right than the other.
I’ve never liked it. Objects should be just that, objects. Objects don’t do things, code does. Objects are represented in data structures. Code manipulates those structures to do things.
Right and OOP matches structures to the appropriate code for manipulating them.
With you, and that’s as someone who learned to code ‘OOP’/Java. With that said, this whole thing needs more social workers and behavioral psychologists and neurologists than I can even begin to enumerate.
Have you ever noticed how often people suggest looking at some other, working source code as a way to learn something? Probably a lot. Most industrious people I know view these opportunities as some kind of ‘duh’ moment to stand on the shoulders of giants.
I hate it. I just can’t. The second I don’t feel totally comfortable with a line/function/etc… RIP. I essentially get a mana/ego draining debuff at that point, and I’m like, at best, 90 minutes from calling it for the day. I need to know, from what I perceive to be the relative, abstract ground, up.
“That’s not viable”. No, it fucking isn’t. It so is not viable. THANKFULLY, I excel at kinesthetic learning. Meaning, as a coder, once you get me familiar with my relative, abstract ground, I will take off at full speed and seem like some kind of savant next to the INTJ stammering about a typo in the instructions.
OOP is just gonna make more sense to some minds, and more functional or top-down is gonna make more sense to others. Hopefully, when these people are on a team, they can see eye to eye, and leverage one another’s strengths… but most importantly…
BE ABLE TO READ ONE ANOTHER’S CODE WITHOUT ASPIRIN
Excellent answer to that. Simple and very true
I am a long time Java programmer. I like the language but you really have to use OOP with great care. I find what really works well is keeping Data in their own object structures, keeping Logic well separated from data and to carefully minimize side effects as much as possible. There is a lot to learn from Functional Programming paradigms.
OOP is a tool. No more, no less. It’s an extremely valuable tool that I would not wish to be without, but it’s not some kind of magic. The reason C++ remains my favourite language is that while I can’t deny it’s a bit clunky syntax-wise, it doesn’t force you into the One True Paradigm. Sometimes the best way to write code is ye olde imperative modelle; but sometimes it’s OO, and sometimes it’s generic, and sometimes it’s functional; and sometimes you need to take machine-level reality into account, but sometimes you don’t want to care…
If OOP is your hammer then I guess every problem is your nail. My toolbox is more comprehensive.
I know it’s a little detail, but polymorphism means, literally, “many-shaped”, from the Greek “poly”, meaning many, and “morphe”, meaning shape. It does not mean “shape changing”.
Oops, that was my fault and also Gary Gygax takes some of the blame.
The Greek-derived word for “shape changing” is “metamorphosis”; the Latin-derived one is “transformation”.
The claim that encapsulation, inheritance and polymorphism are unique to OO is… questionable.
Rich Hickey’s talk Are We There Yet? (https://www.infoq.com/presentations/are-we-there-yet-rich-hickey/) discusses how Clojure breaks down all the individual concepts that OO munges together and makes them individually accessible as a menu one can pick-and-choose from; you can have your type hierarchy for dispatch without the insistence on mutable state, for example.
Getting the pieces as individual tools means they can be used where they make sense, and other tools can be used where _those_ make more sense. More importantly, learning to think about these concepts independently is itself a powerful tool in a programmer’s _conceptual_ toolbox.
Further, you can have polymorphism without inheritance. Role/trait-based objects allow for polymorphism, as does duck typing.
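For example, a duck-typed sketch (with invented names) where neither class inherits from the other, yet both respond to the same message:

```python
class Duck:
    def speak(self):
        return "quack"

class Person:
    def speak(self):
        return "hello"

def greet(speaker):
    # No shared base class required: anything with a .speak() method works.
    return speaker.speak()

print(greet(Duck()))    # -> quack
print(greet(Person()))  # -> hello
```

The dispatch is polymorphic even though there is no inheritance relationship anywhere in the program.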
Yes, you can have polymorphism without inheritance. But that would not be object-oriented. You really need the full complement to claim that: encapsulation, polymorphism, inheritance, and abstraction.
You can still mess up and end up without object oriented programming, but these four work together very well to enable a conscientious, skilled programmer to write object oriented code.
I second this. Polymorphism is simply sending a message and allowing the receivers to vary their response. Inheritance is only required in strongly typed languages, and even then it can be circumvented.
tl;dr instead of bashing OOP maybe you should look at it as Context Scoped Programming.
I don’t blame OOP for how painful it is to have to “mock” state in my RSpec tests; it is because Rails keeps too much transient information in its objects’ state.
Once I started to think about how I can use pure functions even in my OOP code, I found a better way to address the pains I felt from just following basic ways of thinking about how I use objects. Is dependency injection OOP? No, but you can use it in OOP. So in short people don’t hate oop, they hate how they misuse oop. I personally find languages like Elixir provide even more tools that address the pains I feel when trying to maintain too much state. That’s my two cents.
“So in short people don’t hate oop, they hate how they misuse oop.”
Best conclusion, thanks!
I think that might also be true for a lot of frameworks I hate.
“Modelling the world with objects”, encapsulation, and inheritance have existed long before OO. It’s polymorphism that’s new, specifically the ability to invert dependency relationships between modules of code.
It allows you to take a high-level module that depends on a low-level module (bad: it’s coupled to implementation details) and invert the relationship, so that it depends on an interface instead. The low-level modules can then depend on the interface. Once that’s done, the low-level implementation details are replaceable, because they’re just plugins in the system.
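A small sketch of that inversion (the interface and class names are invented for illustration), where the high-level module depends only on an abstract interface:

```python
from abc import ABC, abstractmethod

class Storage(ABC):
    """The interface both sides depend on."""
    @abstractmethod
    def save(self, data): ...

class ReportGenerator:
    """High-level module: knows only the Storage interface, not any backend."""
    def __init__(self, storage: Storage):
        self.storage = storage

    def run(self):
        self.storage.save("report contents")

class InMemoryStorage(Storage):
    """Low-level module: a replaceable plugin implementing the interface."""
    def __init__(self):
        self.saved = []

    def save(self, data):
        self.saved.append(data)

storage = InMemoryStorage()
ReportGenerator(storage).run()
print(storage.saved)  # -> ['report contents']
```

Swapping `InMemoryStorage` for a database-backed implementation would require no change to `ReportGenerator`, which is exactly the decoupling described above.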
I’m someone old enough to remember before OOP was a thing. My career started about the time of the August 1981 “Smalltalk” issue of Byte magazine. I can remember reading that and trying to figure out how to incorporate object-ness into our pre-ANSI C-coded embedded systems. I came up with a simple object-like library (encapsulation, nearly no polymorphism, no inheritance) that more than halved the development time for the systems we were producing (Brad Cox’s “Software ICs” paper also helped, as did an early beta of “Inside Macintosh”). I continued to apply as much OO-ness as I could to the work I was doing, but it was nearly 10 years before I got to work in an OO language (doing Windows UI work with C++ and MFC).
What OO did well was establish a set of patterns that devs could use for encapsulation – it established a vocabulary and some standards that people could use to describe things. It really did encourage cleaner, more readable code. Polymorphism can be useful. Inheritance too, but used improperly it encourages needless coupling. But the way that OO encourage encapsulation is really what made it into what it became.
The 90s were the apex of OO-everything, with DCOM and particularly CORBA leading the rush to OO systems as opposed to OO programs. For the most part, that was a mistake. OO Analysis and Design popped up at this time, with the various factions in the OOAD wars coalescing into UML. But, again, things like UML are tools that can be used or abused. I’ve spent a lot of time over the years arguing against a purely object-oriented set of design metaphors. You need to use the metaphor, the patterns and the tools that make sense for the problem at hand.
Finally, I got a chuckle from your sentence: “If they are used to top-down programming or functional programming, which treats elements of code as precise mathematical functions, it takes some getting used to.” Traditional “top-down programming” predates the modern use of functional programming by quite a bit (yes, I know that Lisp is as old as Fortran – I was a Lisp programmer before I was a C++ programmer). Top-down programming tends to start off looking like a well-plated meal in a nice restaurant – but by the time the program ships, it more closely resembles a plate of spaghetti served up at a summer camp; it generally doesn’t “[treat] elements of code as precise mathematical functions”.
CORBA is not object oriented. In fact, MSFT had a bad habit of selling software products they claimed were object oriented when in fact, they were anything but! CORBA is only one example. MFC was another.
In fact, companies doing what MSFT did were the biggest problem for the OOP movement: people who say they are doing or enabling OOP but really are not.
This seems destined to be a tedious discussion of little value when most OO languages make it simple to write functional and procedural code.
Why not use the best tool for each job?
Why leave a tool out of your toolbox?
I can’t believe that JSON wasn’t even mentioned in the “Was the success just a coincidence? ” section. It was JSON that bound those programming languages with OOP — at least in the interoperability aspect.
And JSON as a format is no more bound to OOP than to functional.
Do not confuse data objects with OOP, data objects exist in many functional languages in one form or another.
OOP is the merger of code into the data objects.
I got that sentence a bit the wrong way around. Fixed. Thanks!
The title image in this blog illustrates some OOP concepts:
Objects are real-world entities,
a class is the blueprint of objects.
For example, the plant in the title image is an object of a class, its branches are also objects of the same class, its sub-branches are objects of the same class too, and so on, recursively.
Wouldn’t that make it increasingly OOP focused, not more functional?
I’m confused by this line:
Thank you for the thought-provoking article!
Personally, as someone who has programmed in industry, I can see why OOP makes sense; one of my jobs involved debugging and adding features to a codebase of over 50,000 lines of C89. Unfortunately, the core of the codebase was a huge struct with hundreds of members, some of which should or should not be used depending on a (combination of) enum values and strings, also struct members. I personally found debugging the software quite a pain, not only because many of the most-used functions were hundreds of lines long (the longest was over 5,000 lines), but also because if a member of the ‘central struct’ did not have the expected value, that was extremely hard to debug; without encapsulation, any function could modify any value in the struct. And the functions that modified the struct were spread over more than 150 files in the codebase…
Object orientation would not have solved ALL our problems, but it is much easier to encapsulate fields in Java or C++ than in C89 (where it is possible in theory, but hard if a manager with deadlines is breathing down one’s neck). Also, OO tends to make it easier to find a function/method that modifies a property, instead of forcing the developer to search among dozens of files – if not more – since languages like Java require all functions that modify private members of a class to be member functions, defined in the same file as the class itself.
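For illustration, a tiny hypothetical sketch (the class and field names are invented, not from the codebase above) of what encapsulating one of those struct members buys you: every write goes through one checked entry point, so a bad value fails loudly at the point of mutation rather than surfacing three files away.

```cpp
#include <stdexcept>

// Instead of a bare struct whose fields any of 150 files can scribble on,
// the field is private and every write goes through one checked setter --
// the only place an invalid value can ever be introduced.
class Channel {
public:
    // The sole way to change the mode; an out-of-range value fails here,
    // not somewhere far downstream.
    void setMode(int mode) {
        if (mode < 0 || mode > 2) throw std::invalid_argument("bad mode");
        mode_ = mode;
    }
    int mode() const { return mode_; }

private:
    int mode_ = 0;  // invariant: always 0, 1, or 2
};
```

Debugging the “member has an unexpected value” problem then reduces to putting a breakpoint in a single function.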
Could the same increase in code quality have been achieved with pure functional programming, by converting the code to Haskell? Possibly; I am not enough of a Haskell expert to know. But I am quite sure that OO, when applied intelligently, could have improved the situation.
Of course, I have seen OOP overused in other jobs, with overly complicated overengineered ‘solutions’. But well, one can also make code hard to maintain by overapplying functional programming (http://jimplush.com/talk/2015/12/19/moving-a-team-from-scala-to-golang/). I agree with some other posters: OO is a tool that is applicable to some situations, using a tool well and not using it in situations where another tool would be more appropriate (like trying to put a screw in a wall using a hammer) is part of professional craftsmanship. The most-loved languages in the survey you mention (Rust, TypeScript, Python, Kotlin and Go) all support some version of OO, but also imperative programming and some degree of functional programming. And I can understand why developers like this flexibility!
Two small points though: while it is interesting to theorize that OOP was just ‘lucky’, basically, it seems that languages developed by people in industry after the 1980s (like C++, Java, Go, Kotlin, Rust) all support some form of OO. Haskell and Scala seem to come from academia. I therefore hypothesize that OO languages were designed by industry programmers with experiences somewhat like mine, namely that OO is useful in the right situation, especially as codebases grow and more and more logic starts to be based on structs, not on single numbers. [of course, OO is also very handy for code completion in IDEs, and dynamically typed languages like Python would have great problems to even do simple things like make a user-friendly “print” function without OO]
Also, I am a bit surprised to see Haskell and Scala mentioned in this post as being among the ‘most-loved languages’. In fact, they are 14th and 15th on the list of ‘most loved languages’ and 11th and 12th on the list of ‘most dreaded languages’, so basically they are liked LESS than the average programming language, and definitely less than the five ‘OO/imperative/FP mixing’ languages mentioned above.
But overall, thank you very much for your post, thinking about it truly gave me some new insights (on Python, for example)
I agree with almost all your points. OO is useful in large projects. OO can be painful in small projects. OO is a gorilla holding a banana when a beginning programmer just wants a banana.
There are two reasons why I wasted so much time with OOP. First, when you learn programming at school or university, OOP is what you are taught, and as a beginning programmer you believe this is the way to program and the road you should stick to. But then you may find out that computer science is a really young field, that your tutors are only human and simply don’t know better, and that you have to enter a field like functional programming on your own. Which is really hard, and wouldn’t have to be if, second, we simply stuck to concepts that can be expressed in the math we are taught for 10+ years in school and university. It is really frustrating: we try to bring abstract mathematical concepts to students, they ask what it is good for, and when someone says “I like this abstract math stuff and want to apply it to the real world” and starts studying computer science, they find out that all of it is treated as irrelevant. We start at zero and learn this OOP stuff until the math knowledge fades away, and it becomes harder and harder to switch to concepts that are based on math, like FP.
> Note that: Inheritance is also necessary to allow polymorphism, which we discuss below.
No, it’s not.
1. Dynamic languages have polymorphism without inheritance.
2. Templates (such as C++ and D) allow compile-time polymorphism without inheritance.
3. Typeclasses (such as Haskell and Rust) allow run-time polymorphism without inheritance.
Really, inheritance is both the most recognizable part of OOP, and the least interesting.
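For example, point 2 in the list above can be shown in a few lines. A sketch (the shape types are invented for illustration): a template accepts any type that happens to have a `name()` member, with no common base class and no virtual functions anywhere.

```cpp
#include <string>

// Two unrelated types -- no shared base class, no virtuals.
struct Circle { std::string name() const { return "circle"; } };
struct Square { std::string name() const { return "square"; } };

// Compile-time polymorphism: `describe` is instantiated separately for each
// type that provides a suitable name() member; dispatch costs nothing at
// run time because it is resolved during compilation.
template <typename Shape>
std::string describe(const Shape& s) {
    return "a " + s.name();
}
```

The same idea shows up as duck typing in dynamic languages (point 1) and as typeclass/trait bounds in Haskell and Rust (point 3).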
Adding to your list… Microsoft’s COM provided polymorphism without any hint of inheritance (via the QueryInterface mechanism)
There is no one cure for all diseases. Same here: no one programming paradigm effectively solves all programming problems. Each programming paradigm has its own niche, be it imperative, object-oriented, functional, logical, meta-programming, highly concurrent or whatever else. Hating or hyping one particular language simply doesn’t make sense. Localize the problem and grab the best tool for fixing it. No more, no less.
This is the answer that makes the most sense. OOP does not make sense in some cases, but does in others. Use the right tool (or methodology) for the job. 🙂
The main problem is that everyone always gets so obsessed about inheritance/polymorphism, which are the least useful parts of OO. The overwhelmingly most important part is that a piece of software should be autonomous: it should do its designated task and it should not know about anything else, particularly not about non-related parts of the program. This is exactly how you avoid “tight coupling” and severe bug chains, where you’d modify one part of the program and then break a completely unrelated part. Private encapsulation helps with this.
People who speak of “OO languages” haven’t really grasped the above, which is the core of OO, but also language agnostic and common sense. It’s all about autonomous objects and loose coupling – if people don’t design with that in mind, they are doing it wrong, no matter if they program in 8051 assembler or C#. You can’t really argue against using common sense and avoiding severe bugs + maintenance problems.
As for inheritance and class interfaces, a common problem is that you rarely can predict all future use-cases for derived classes upon designing the base class. So it should be used with caution and you must allow the whole class family tree to get modified in future maintenance – otherwise inheritance might cause cumbersome interfaces. Sometimes inheritance is very useful, sometimes it is completely useless – in either case it’s not something that your whole program design needs to revolve around. Above all, apply the KISS principle when designing.
And it is even more important to focus on “x uses y” relations/dependencies than on “x is a y”. Class charts and UML diagrams etc tend to be all about the latter, often forgetting the former.
I’d suggest that developers are obsessed with inheritance and polymorphism because those are the very first concepts introduced in most OOP books and classes. If developers hate OOP, it’s because most of them don’t get it. What percentage of developers could explain why a switch statement is a code smell and what the OO replacement would be? I’d guess maybe 10%.
Could you elaborate more on your last sentence, please? 🙂
I believe the OP is saying that abstract classes should be used instead of switch in many cases. For example, a Car class with a method to return length might have a switch to check for Ford, Chrysler, Tesla, etc. Better is to have an abstract Car class with an abstract method for size, and separate Ford etc. classes which inherit from the abstract class. Each of these classes overrides the abstract method with its own response.
You still need the switch to create the correct instance though. Abstract classes with inheritance just move where the switch is.
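Roughly what the Car example above looks like in code (makes and lengths are invented): the per-make switch disappears from every call site and is replaced by one virtual call, while the single remaining switch is confined to a factory function, exactly as the previous comment points out.

```cpp
#include <memory>

// Each make answers for itself via a virtual method -- no switch needed at
// any site that asks for a length.
struct Car {
    virtual ~Car() = default;
    virtual double length() const = 0;
};
struct Ford  : Car { double length() const override { return 4.5; } };
struct Tesla : Car { double length() const override { return 4.7; } };

enum class Make { Ford, Tesla };

// The only switch left in the program: choosing which class to instantiate.
std::unique_ptr<Car> makeCar(Make m) {
    switch (m) {
        case Make::Ford: return std::make_unique<Ford>();
        default:         return std::make_unique<Tesla>();
    }
}
```

Adding a new make means adding one class and one factory case, instead of hunting down every switch in the codebase.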
> Because many modern programming languages are based on OOP concepts,
Is this a typo?
In the 1980s I was part of the team that developed the first GUI for the IBM PC. IBM said it could not be done, but we did it. OOP was very important to the success of the project. There was no C++, so we implemented it in Lattice C 1.0. It is not necessary to have an OOP language for OOP development, but it does make it easier. In C one just has to implement a vTable as the first member of every object data structure, the same way C++ does it. I won’t go into the details here of how to make it work, because we have better languages now. I can assure you that our work would have been much harder without using OOP concepts for GUI development.
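The vTable-as-first-member technique might have looked roughly like this (the struct and function names here are invented; this is a C-style sketch of the general idea, not the original Lattice C code):

```cpp
// A hand-rolled vtable: a table of function pointers stored as the first
// member of every "object" struct -- essentially what C++ compilers
// generate for you behind the scenes.
struct WidgetVTable {
    int (*area)(const void* self);
};

struct Rect {
    const WidgetVTable* vtable;  // must be the first member
    int w, h;
};

static int rectArea(const void* self) {
    const Rect* r = (const Rect*)self;
    return r->w * r->h;
}
static const WidgetVTable rectVTable = { rectArea };

// "Virtual" dispatch: call through the table, knowing only that the object
// begins with a vtable pointer -- not which concrete struct it is.
static int widgetArea(const void* widget) {
    const WidgetVTable* vt = *(const WidgetVTable* const*)widget;
    return vt->area(widget);
}
```

Every “class” supplies its own table, and generic code like a GUI event loop dispatches through the pointer without knowing the concrete type.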
I’m not having this conversation (been there, done that), but I’m getting a real kick out of the old ads in the issue of Byte Magazine that this article links to. 🙂
I think OOP is tremendously useful, but not (only) for all of the reasons you’ve listed. I think one of the most important results of OOP languages is that it moves the call target out of the function.
Why is this useful?
Well, imagine a C codebase with hundreds of functions and no encapsulation. I have a struct that I want to do something with, but I can’t remember the name of the function. What do I do? I go consult the docs? Or, I impose some regimented naming convention to make things “discoverable”. Can it be done? Yes, but sometimes, especially in an unfamiliar codebase, it can be extremely difficult to get your bearings.
Now, all the other tenets of OOP are useful too, but I think the issue is that the examples used to teach those concepts, especially inheritance and polymorphism, are often silly if not downright bad designs.
For me, the click-bait-y, hyperbolic title detracts from the credibility of this article. I skimmed over it and it seems like this article isn’t thorough enough to make a concise, insightful point. I, for one, don’t even understand “hating” OOP. It’s a pattern, a technique, a thought process. It’s good at some things and not so good at others. They’re all just tools in the toolbox.
Oh geez – another fad. You can do anything badly. So don’t.
I was hearing that functional programming languages were the new / hot / upcoming thing, when I was in college (around 1980). Guess it is still. Or not. At least in some corners. Mostly used Lisp in school.
Yes, I still have my copy of the 1981 Byte magazine Smalltalk issue. Was a huge inspiration.
You can write bad code in any language, using any technique. In my work, I have seen a lot of bad code from others. Fault was not the language or approach.
Over the long run, your average programmer writes bad to hideous code. You need a more skilled programmer to prevent software rot. Over time, the more skilled programmers are pulled off old code to work on new things. So the old software slowly rots. New programmer looks at old / rotten code, and thinks the problem is with the technology. The problem is with the programmers.
To the topic, use everything, as fits the problem. These things are not mutually exclusive.
> If everyone hates it, why is OOP still so widely spread?
I think the reason is pretty obvious: the assumption is wrong. Not everyone hates OOP.
I think the work that Alan Kay’s research group has been doing at VPRI and HARC is awesome, and they seem to absolutely *love* OOP. Or, if you look at Newspeak, you can see how it elegantly solves many problems by being *aggressively* more OOP than Smalltalk.
For example, in Newspeak, just by being “more OOP than Smalltalk”, classes automatically also become modules. And those modules have features that are considered very advanced and are only supported by a small number of very powerful module systems (Racket’s units and ML’s higher-order Functors). Newspeak, in turn, has those features *for free*, just by virtue of being object-oriented.
I also find it quite funny when people point to Scala as an example of “moving away from OOP to FP”. Scala’s FP features are actually pretty much on the same level as Java’s: a function in Scala is simply an object that has a method named `apply`, and in Java, a function is simply an object that has a method named in its corresponding *functional interface*. Where Scala truly shines is in its powerful support for OOP. And similar to Newspeak (and that’s no coincidence, since Gilad Bracha explicitly cites Scala as an influence on Newspeak, whereas Martin Odersky explicitly cites Gilad Bracha’s approach to modularity), Scala also gets a powerful module system by using objects as modules and traits / classes as module definitions.
The reason why Scala comes ahead in most comparisons with Java is *not* because Scala is “functional” and Java is “OO” but because Scala is just much more tastefully designed (in my opinion at least, obviously, taste is subjective).
Besides, Java isn’t even particularly object-oriented to begin with, for example, most Java programmers don’t even know that classes aren’t object-oriented, only interfaces are.
Thinking a bit more about it, I would also like to challenge the second half of the statement:
> If everyone hates it, why is OOP still so widely spread?
I believe OOP is *not* widely spread. I would argue that only a tiny fraction of the code that is deployed in the real world is object-oriented. Most Java, C#, and C++ code I have seen certainly *isn’t* object-oriented. A lot of Python, Ruby, and PHP code isn’t either.
In fact, both Java and C# make it incredibly hard to write object-oriented code, if not plain impossible. E.g. Guy L. Steele has shown that Proper Tail Calls are required to maintain Object-Oriented Encapsulation, but neither Java nor C# (nor C++, Python, Ruby, PHP, not even Scala) support it; ECMAScript specifies it but most implementations ignore that part of the specification.
> only a tiny fraction of the code that is deployed in the real world is object-oriented.
The original idea of OOP was to solve the spaghetti-code problem of procedural programming:
a change to a data structure often meant changing many files and much code.
If you take a look at real-world Java projects (e.g. on GitHub) you will find many projects that follow data encapsulation (using private fields) but violate information hiding by using getters and setters.
They still follow procedural programming: fetch data (getter), calculate, store data (setter).
It is not surprising that such projects produce spaghetti code and are really hard to understand and maintain.
My static code analysis is to count (grep) the getters and setters per line in a project.
At the beginning I was also blinded by the ideas of polymorphism, inheritance, multiple inheritance, late binding,
and the discussion about what good OOP patterns are (Gang of Four).
Information hiding? Yes, that means private fields!?
And if access is needed, then you just add a getter and setter, like the standard classes of Java do!?
Information hiding seemed too easy to think much about 🙁
But information hiding is it; it is the soul of OOP.
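A tiny made-up example of the difference described above (the class and rule are invented for illustration). Getter/setter style puts the rule in every caller: `int b = acct.getBalance(); acct.setBalance(b - amount);`. With information hiding, the rule lives in exactly one place and callers just tell the object what to do:

```cpp
// Tell, don't ask: the overdraft rule is enforced inside the object, so no
// caller can ever bypass it -- there is no setter to misuse.
class Account {
public:
    explicit Account(int opening) : balance_(opening) {}

    // The only way the balance changes; the invariant lives here alone.
    bool withdraw(int amount) {
        if (amount > balance_) return false;  // rule: no overdrafts
        balance_ -= amount;
        return true;
    }
    int balance() const { return balance_; }

private:
    int balance_;
};
```

Grep for setters in a project like this and you find none, which is exactly what the per-line count above is measuring.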
Could you elaborate (or indicate a link/reference) on what you mean by “Classes aren’t object-oriented, only interfaces are”
One of the fundamental differences between Object-Oriented Data Abstraction and Abstract Data Types is that two instances of the same Abstract Data Type can inspect each other’s representation, but *no object* can inspect another object’s representation *even if* the two objects happen to be instances of the same type.
In Java, however, two *different* instances of the same class can inspect each other’s representation (private fields, private methods, etc.). Since objects by definition *cannot* do that, it follows that instances of classes cannot be objects. Instances of interfaces, however, can *not* inspect each other’s representation, for the simple reason that interfaces do not contain private fields (or fields at all) or private methods.
Therefore, interfaces define Objects and classes define Abstract Data Types.
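Concretely (shown here in C++ with an invented class, but Java classes behave the same way): access control is per *class*, not per object, so a member function can freely read a *different* instance’s private state. That is exactly the Abstract-Data-Type behaviour described above, and exactly what an object, by Cook’s definition, cannot do.

```cpp
// Access control is per class: inside Counter's member functions, the
// private field of ANY Counter instance is visible, not just this one's.
class Counter {
public:
    explicit Counter(int n) : n_(n) {}

    // `other.n_` is private, yet this compiles -- we are in the same class.
    bool sameAs(const Counter& other) const { return n_ == other.n_; }

private:
    int n_;
};
```

An interface-typed value offers no such hole: with no fields at all, one instance has nothing of another’s representation to inspect.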
A nice writeup is in *On Understanding Data Abstraction, Revisited* (http://CS.UTexas.Edu/~wcook/Drafts/2009/essay.pdf) by William R. Cook, but it has been known for far longer than since 2009.
Exactly. Lots of people actually LOVE OOP, and for the record, functional programming is actually older. Scheme, for example, was used to implement EMACS.
I originally thought this article was simple click-bait, as I’ve only met a few folks that don’t prefer an OOAD approach to modern software systems. Of those folks, all lack the fundamentals of what OOAD actually is and held tight to functional programming (the only form of programming they understood from several decades before). Observing the comments, it appears that there are others that dislike OO, and I’d bargain that it’s because they don’t properly apply it. But I’ll try not to be presumptuous about the opponents of OO.
I’ve interviewed dozens of folks over the years and typically one question can define whether the candidate understands OO; “when should you use inheritance?”. The typical ‘to reuse code’ is a tell-tale that the candidate simply doesn’t understand the fundamentals of OO. I’d wonder if many opponents of the paradigm fall into that particular category?
Every experienced programmer I’ve worked with over my 22+ years prefers OO, understands its alternatives and yet still prefers OO. Perhaps because they apply it correctly, perhaps because they were properly trained in it, or perhaps because they tend to work on sophisticated complex systems that tend to demand encapsulation, data abstraction and such. Small, 100-line scripts and utilities don’t require sophisticated design paradigms.
I’d invite those anti-OO folks to read, and study, the Bible of OO, Grady Booch’s Object-Oriented Analysis and Design, before disregarding it as an appropriate approach to developing complex, sophisticated and maintainable systems.
Could you please fix your very irritating and blatantly untrue title? I’m not aware of anyone that hates OOP, and I’ve been doing it for 30 years.
Why do people dislike OOP? Is that actually true?
Our industry tends to have some stereotypes, and from time to time people suddenly claim to like one approach or another.
There is no silver bullet; each approach has pros and cons. Live with it.
Well I think it’s true that OOP is vocally disliked.
I certainly couldn’t say whether the dislike is actually widespread, or whether it’s just a noisy minority.
According to Uncle Bob (Robert C. Martin), only polymorphism is really a feature of OOP. He claims that C had perfect encapsulation with the separation of header files and implementation files. Inheritance was also possible by the use of pointers in structs.
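The kind of C encapsulation being described is usually done with an opaque pointer; a sketch (the `Point` type and function names are invented; shown in one file here, though in practice the two halves live in a header and a `.c` file):

```cpp
#include <stdlib.h>

/* ---- point.h: all a client ever sees ---- */
typedef struct Point Point;        /* opaque: the layout is hidden */
Point* point_make(int x, int y);
int    point_x(const Point* p);
void   point_free(Point* p);

/* ---- point.c: the hidden representation ---- */
struct Point { int x, y; };

Point* point_make(int x, int y) {
    Point* p = (Point*)malloc(sizeof *p);
    p->x = x;
    p->y = y;
    return p;
}
int  point_x(const Point* p) { return p->x; }
void point_free(Point* p)    { free(p); }
```

A client holding only the header cannot touch `x` or `y` at all, which is arguably stronger encapsulation than `private` in a class whose full definition the client can see.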
I believe the reason why OOP projects go wrong is because of a lack of knowledge, experience and discipline on the part of developers.
Many developers don’t use interfaces at all. Then, when they learn about interfaces, they use them everywhere, even when not needed. Finding the right abstractions and the right level of abstraction is an art and requires experience.
Same thing for Design Patterns. Developers go from being ignorant about them to using them everywhere. An experienced developer only uses them when needed and knows which one to apply and when.
The practice of Test-Driven Development, along with knowledge of Design Patterns, the SOLID principles and other architectural principles, is clearly what is missing in many OOP projects. It might be due to the fact that half of all developers have less than 5 years of experience.
Watch his very interesting session at Yale about OOP:
Who hates OOP? Are people having anger management issues? Another tagline: if everyone hates death, why is it still so widespread? Because it is inevitable at the moment. (BTW, I don’t think “widely spread” is the correct syntagm here.)
I don’t hate OOP. In fact, I love it. I am sure that without OOP, programming would be harder and more error-prone.
I was programming in the 80s. OOP was a revelation. When you try to do OOP without a language like Java or C++ it’s a mess.
Before OOP, you had libraries of functions. You passed in “structures”. You had to keep track of which functions were allowed to use which structures.
If you needed a new structure that was similar to, but slightly different from your old structure, you had to copy it line-for-line to the new structure – meticulously maintaining the order of the members.
You *could* cobble together an OOP environment using the C preprocessor and a bunch of conventions. But that is what C++ did for you.
“While acknowledging OOP cornerstones like encapsulation, inheritance, polymorphism” – these WERE OOP in the 80s. If you didn’t do these three things, you weren’t OOP.
I think the writer has it all backwards. We got OOP and were so relieved with the simplicity of the languages. ALONG FOR THE RIDE were resource management, type-safety, generics, exceptions, and memory management.
Try interviewing someone older than 30 for your articles, Stack Overflow.
Well, Linus Torvalds is older than 30 and is quite public about how he despises C++ and its particular brand of OO.
Ian Joyner, who is a fan of OO and an OO evangelist, has written scholarly articles arguing that C++ is one of the worst OO languages.
Even Stroustrup, the inventor of C++, admits it has problems. It was not the very first OOP language, but it was the first one to become widespread, so the problems with it showed up far better than in earlier languages like Simula. These earlier languages did not even all support all four of the required ingredients: polymorphism, inheritance, encapsulation & abstraction.
He openly admitted, for example, that making it so highly backward-compatible with C made the language a lot uglier than it had to be. It made support for OOP way more cumbersome than it had to be, too.
Or, to quote Alan Kay: “I invented the term Object-Oriented, and I can tell you I did not have C++ in mind.”
OOP purists always considered C++ to be a bit of a hack.
Another installment in the series of misguided articles denigrating OOP. I am surprised it is published on stackexchange, a network that is itself powered by an OO language. A lot of commenters have already pointed out the usefulness of OOP paradigm in terms of human readability and understanding.
A lot of usefulness of OOP paradigm is beautifully captured in Domain Driven Design dogma.
I’m sure that someday we’ll all be programming in functional languages, using transactional memory on our Itanium processors with ATM networking… not. (I started my career working on ATM, and have a deep cynicism towards over-hyped things as a result)
There are some nice philosophical arguments as to why functional programming is wonderful, but they tend to ignore the real world, which is stateful. (I mostly program operating systems; my son programs cars; both have lots of state) Functional programming is “mathematical” for people whose math education stopped at algebra and never got to calculus – any engineer can tell you that you can’t describe the real world with just algebra.
When you criticize pure functional programming, you tend to get counter-arguments based on “X functional feature is good”, which is kind of ridiculous. It’s sort of like arguing that Cobol is great because “readable names are a good thing”. (“ADD A TO B GIVING C” is anything but a good thing, and deliberately making sure a language has no ability to represent state is just as bad) I’m sure whatever procedural languages being used in a couple of decades[*] will have filter, and map, and list comprehensions, but they’ll also have mechanisms for encapsulating state. [*with non-procedural languages taking over more and more use cases]
A final note – OOP languages were among the first to deliver garbage collection (real, in the case of Java, and fake in C++) to large numbers of programmers – this probably had a big effect on their adoption, and adoption has its own momentum since it’s hard to use a language you can’t hire for. (Well, Perl and Visual Basic did too, around the same time frame, which is probably part of the reason why despite their horribleness they remain far more popular than Haskell or Scala)
OO is simply one means of expressing design in code: “The one thing to remember about OO is that its primary purpose is to encapsulate complexity. You will not be surprised to learn that when engineers have a tool at hand that hides complexity, that they often end up building very complex things!”
That Twitter uses Scala is also unsurprising; Scala is an OO programming language (similar to Java) with some FP features. Rust is likewise very OO in its nature. I’m not sure how suggesting that developers like Scala and Rust is in any way showing a movement away from OO.
tl;dr – This article is ClickBait.
Who said everyone hates OO? I like it a lot when I need to code a relatively large piece of code.
What I don’t understand, though, is why FP suddenly became a holy grail. The ONLY benefit is the absence of side effects, which lets you distribute the code across many CPU cores and hence speed it up.
Please don’t tell me a programmer, or any human, can naturally think about logic in a recursive way. Humans think about concepts in terms of abstraction, not recursion.
I do agree that OO is horrible when certain incompetent developers code on a project for a few months without caring about the design and then leave. When someone starts polluting the code base without proper code review, it becomes spaghetti, and neither OO nor FP can save the project.
One of the most important features of OOP is the ability to properly separate dependencies; Dependency Inversion (the D from SOLID), reversing the dependency from the call-flow. Totally missed from the article.
It doesn’t mention design patterns, which have been hugely important, particularly favouring composition over inheritance.
It doesn’t mention the introduction of Lambdas in C++, C#, Java which paves the way to a mixed approach of Functional Programming and OOP. Again, hugely important.
Pretty light-weight article.
Well, I’m not going to go back to C-like struct mess instead of using OOP (wisely) in C++.
The problem with OOP is not OOP – but rather the developers who use it. OOP is just a tool – it is not a framework nor is it a dictate on how to do things. The problem we as developers face is that we become overly focused on the tool, sometimes to the point of being fanatical. OOP came along and after a while everything had to be OOP – even when there was not a good fit we tried to push that square peg into the round hole, expanding and squeezing the OOP specifications to try to make the tool work around the problem.
Some points have been made above about corporate culture playing a role and this is very true – we make the compromise between “genius” code and maintainable code. The team that has to cater to developer churn has to comply with standards to ensure continuity and cohesion in the team – while this provides necessary structure and stability to the production process it removes from it certain freedoms that stand alone developers have in terms of exploring different ways of solving a problem. To put it another way the team developer in a conformant corporate unit has a bag with a hammer, a saw and a chisel and is asked to make a cabinet. The freelancer can use a drill, circular saw, hack saw, rasp and a bunch of other tools that allow for far more flexibility and refinement.
As I said it is a trade-off. What we will see (as has been alluded to already) and which history has already shown us is “the way” is a merging of toolsets and methodologies. In the same way that physicists are searching for the one unifying theory, developers and engineers are searching for the one paradigm into which any software problem will fit – and we have not found that yet.
For now we mix and match what works. There will still be those new tool that come out that many will be convinced is the tool or paradigm but the nature of the industry and the people in it is to continually search for better ways to do things and so the new things will come and go, be assimilated or forgotten.
OOP is no different. There are some important advantages that OOP gives us that cannot be ignored – it is really about how we use it in conjunction with other techniques and tools that is what is going to determine its future.
I cannot see any hate at all, and it would not make any sense. If your mindset is on things and what they can “do”, OO is a quite natural choice. BTW, how many really non-OO languages are in widespread use (besides C, and even there you often program quite OO – read: struct-oriented)? Would someone like to tell me what everyday software in widespread use is not in some way object-oriented?
I seldom reply to opinionated articles that merely repeat old ideas and provoke discussion in an antipodean manner, but I need to point out one critical aspect missing from both sides of the debate: virtually nobody is doing OOP today, but rather COP – Class-Oriented Programming.
I cannot stress this enough. It has all become about classes and concepts created as a consequence of this. And the focus is horribly adverse towards what really matters in programming – the user. The user cares about objects, not templates of objects and the structure the engineers have designed.
Fortunately there is a way out of this endless struggle. It’s called DCI (Data Context Interaction), a new paradigm in programming, I cannot stress that enough as well, a *new paradigm*. Which means that you cannot use the old “OOP/FP” thinking to understand it, so I advise you to avoid the reflexive “it’s just another pattern” argument. If you truly want to move from classes to objects however, follow the website link, and the user will thank you!
OOP is so successful for the simple reason that the universe around us is full of cooperating objects.
Today morning I accessed object MyCar using public method unlockAndOpen(Key myKey), then I used setGearbox(“D”) etc etc…
Internal private methods like setTurboBoost() or openEGRValve() simply do not interest me (although they are necessary for the car to work); I am just a user using public methods.
OOP is easy to comprehend and design, if someone does not over engineer it.
But, in an alternative universe, maybe there is a function MyCarGo(driver) that internally destroys me and creates an identical copy of me at the destination…
And what exactly made you reach this conclusion that everyone hates OOP?
I’ve been a professional programmer for 20 years now, and I’ve done projects in many different areas. Maybe it’s just my area of expertise or maybe it’s coincidence, but 70-80% of the code I write is purely structural: do A, then B, then C, and if D then do E, otherwise F.
Sure, it’s encapsulated into classes; you have a database writer there and a file reader here, and you have the usual real-world objects like Customer and whatnot. But for me classes are more like useful brackets. Either it’s a tool class that has a number of public methods that don’t really rely on members but are more functional in concept, or, if a long sequential task has to be done, I have a class with a constructor and one public function, and I then split the task into many subfunctions (all private) and use member variables so that I don’t have to pass a variable through from the public entry point to the n-th subfunction where I actually need it. If the long sequential task can be divided into separate subtasks, I may employ the same strategy downstream. Then the main bracket class just sequentially instantiates a number of subclasses and calls their one public function.
Of course you have interconnected systems where one change fires an event and that event is caught and processed, but even then this often just means that a further task is to be worked on sequentially.
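The "bracket class" pattern described here might look like the following sketch (names invented for illustration): one public entry point, private subfunctions, and member variables carrying intermediate state so it need not be threaded through every call.

```python
class ReportTask:
    """One long sequential task, split into private subfunctions.

    Member variables hold intermediate results, so nothing has to be
    passed from the public entry point down to the n-th subfunction.
    """
    def __init__(self, raw):
        self._raw = raw
        self._parsed = None
        self._summary = None

    def run(self):
        """The single public entry point: do A, then B."""
        self._parse()
        self._summarize()
        return self._summary

    def _parse(self):
        self._parsed = [int(x) for x in self._raw.split(",")]

    def _summarize(self):
        self._summary = sum(self._parsed)


result = ReportTask("1,2,3").run()
print(result)  # 6
```

A larger task would have its main bracket class instantiate several such subtask classes in sequence, each with its own single public function.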
Some coders do indeed hate OOP. It’s probably those coming from C, PHP and the other predominantly procedural languages. I’ve dabbled in both paradigms, and OOP was the inevitable choice, in my opinion.
Good OOP is what all code should strive to look like. Build simple modules that achieve a purpose and are trimmed clean of any unnecessary function. If the modules are well designed, there won’t be any bloatware. If anything, it gives the developer the flexibility to build a system that can easily be broken apart and provided in exactly its necessary parts. This makes maintaining and updating the system a beautiful process.
Here’s the problem most coders run into: OOP requires a deep understanding of the system at hand. One can’t develop flawless modules without truly understanding the purpose of the system and its sub-systems, including any future plans and prospects. You can’t strip the gorilla and the banana down to fur and nails if you’re never going to need to look that far. It’s all about purpose. That’s what keeps modules clean and efficient, and that’s what good code should look like.
Everyone doesn’t hate OOP. Most developers love it. It’s “taken over the world” for a very good reason: it’s the best solution by far that we’ve come up with for taming complexity, in a world where the complexity required by software solutions only ever increases.
When you look into anti-OOP ideas, you’ll always, *always*, find they’re being pushed by somebody with a functional-programming agenda. FP has been around for longer than OOP, and it’s been failing the entire time, because it’s just not very good. It has a few good ideas, which are easily integrated into a well-designed OOP system (just look at LINQ in .NET), but they’re choked out by all the difficult-to-understand, difficult-to-reason-about, and difficult-to-make-performant nonsense that has always been FP’s Achilles’ heel.
So FP advocates do what people who fail at fair competition always do: go into politics. They lie. They point at flaws in some specific language (Java and C++ are frequent targets), claim they’re flaws with OOP in general, and then assert that therefore we should be using FP instead because it doesn’t present this specific problem, disregarding the twin facts that 1) the problem they’re presenting also doesn’t exist in better-designed OOP systems and 2) FP has a host of serious problems of its own. (Which also don’t exist in well-designed OOP systems.)
Everyone (within epsilon) *loves* OOP. Please don’t go spreading around lies and propaganda being pushed by people who have been abjectly failing in the marketplace of ideas for longer than virtually all of us have been alive.
I’ve never heard an effective programmer question OOP itself. Having experienced the “gorilla with a banana” issue on many occasions in the last 30 years, I can say without any reservation that it’s not a problem with OOP but an issue with your architecture, or someone else’s architecture. The code or library wasn’t designed correctly, or it was designed in an all-encompassing, do-everything-and-do-it-our-way fashion. Using OOP in programming doesn’t guarantee a good result. You still need good design.
Interesting article that omits at least one elephant in the room: SQL. The SQL standard had virtually no provision or requirements for OOP concepts until structured types in SQL:1999, and still today not all major vendors support that, and none supports it fully, I believe. Yet it’s easy to think of a table as an object and a function as a method, but the two are not usually bound together, with triggers and structured types being possible exceptions. But then SQL offers a kind of declarative programming which is neither OOP nor strictly imperative, and not fully covered by those paradigms, or by the functional paradigm for that matter.
Yet SQL has been with us now for 50 years and along with COBOL constitutes a large bulk of code running commerce and governments world wide.
I think the question is wrong. Old procedural code did not scale and OOP promised something that did, but failed to deliver and was destroyed totally by Giuseppe Castagna decades ago as theoretically unsound and almost completely useless for any application (the so-called covariance problem makes methods useless for handling any problem with two variant arguments, which is almost every interesting problem).
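The covariance problem alluded to here bites hardest on "binary methods": methods whose argument should have the same type as the receiver. A hedged Python sketch of how subtyping breaks them (names invented; the classic example uses points and colored points):

```python
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def equals(self, other):
        return self.x == other.x and self.y == other.y


class ColorPoint(Point):
    def __init__(self, x, y, color):
        super().__init__(x, y)
        self.color = color

    def equals(self, other):
        # Covariant override: silently assumes `other` is also a ColorPoint.
        return super().equals(other) and self.color == other.color


p, cp = Point(1, 2), ColorPoint(1, 2, "red")
print(p.equals(cp))   # True: a plain Point happily ignores color
try:
    cp.equals(p)      # fails: a plain Point has no .color
except AttributeError:
    print("substitutability broken")
```

The subclass cannot safely stand in for its parent wherever a same-type argument is involved, which is the heart of the theoretical objection.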
The question is why OOP is still taught by those that should know better and why researchers still waste time trying to invent a perpetual motion machine. The answer is probably that the other alternative, functional programming, is even more useless.
The algebraic framework which has the desirable properties is over three decades old but it is very hard for even professional mathematicians to understand, and has never been modelled successfully in a programming language. Current attempts to do that are failing in languages like C++ and Haskell alike. Category theory is just plain hard, and humans always take the easy way out, the path of least resistance: make a huge mess but we do not care as long as everyone else is making a mess too.
“If everyone hates it…”
Look up “False premise” on Wikipedia.
*you wanted a banana but you got a gorilla holding the banana.*
That’s a monad.
Can anybody elaborate, or point me to elaborations, on how concepts from OOP and FP facilitate, in their respective ways, tasks like creation, testing, documentation, fixing, improvement and reuse? In the daily life of programmers, what can be different, easier or harder, depending on the concepts used?
I appreciated the point that “the intrinsic qualities of a programming language are not to be confused with its historical success”, but I would like to go deeper on the side of “intrinsic qualities” and, if possible, “objective qualities” which are proven and commonly accepted.
The title is absolutely exaggerated. I could claim “nobody hates OOP”, or “everyone loves OOP”. Where is my proof? Where is your proof? OOP is still necessary for many reasons. In some fields you just want to use plain C for the sake of simplicity. But a 50,000-page-long codebase is not practical for anyone without OOP.
OOP started with Simula (a language for simulation, i.e. modeling the real world), continued with Smalltalk, then C++. I think one aspect is often not mentioned: the packaging of functions with data definitions. Procedural/functional languages do not have this (although they do have “information hiding”). It is this grouping into classes that allows OO languages to model the real world so easily.
“the packaging of functions with data definitions.” — but that is exactly what OOP’s encapsulation is! Keeping methods and the data they maintain as state within objects is how encapsulation encourages reusable code and prevents large classes of bugs.
Why, even before anybody had heard of either OOP or ‘encapsulation’, computer scientists like Wirth were writing books like “Algorithms + Data Structures = Programs”, showing how encapsulation could help in programming tasks.
I still have this book and refer to it often.
It doesn’t have to be OOP or (1-OOP). Take a look at what the (awesome) Julia language offers: multiple dispatch. There’s a nice example of multiple dispatch starting at about 5:30 : https://www.youtube.com/watch?v=kc9HwsxE1OY
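Julia's multiple dispatch selects an implementation by the runtime types of *all* arguments, not just the receiver. Python has no built-in equivalent, but the idea can be approximated with a small hand-rolled registry (a sketch, not production code; all names invented):

```python
# Registry mapping argument-type tuples to implementations.
_impls = {}

def dispatch(*types):
    """Register an implementation for a specific combination of types."""
    def register(fn):
        _impls[types] = fn
        return fn
    return register

def collide(a, b):
    # Dispatch on the runtime types of BOTH arguments.
    return _impls[(type(a), type(b))](a, b)


class Asteroid: pass
class Ship: pass

@dispatch(Asteroid, Ship)
def _(a, s):
    return "ship takes damage"

@dispatch(Asteroid, Asteroid)
def _(a, b):
    return "asteroids merge"


print(collide(Asteroid(), Ship()))      # ship takes damage
print(collide(Asteroid(), Asteroid()))  # asteroids merge
```

In single-dispatch OOP, `collide` would have to live on one of the two classes and inspect the other's type by hand; multiple dispatch treats both arguments symmetrically.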
“Like” and “dislike” are not applicable to the real issue. OOP is a tool just like any other programming construct. It has appropriate uses and inappropriate uses. The tool needs to be selected for the job. OOP lends great advantage to large-scale tasks. It adds unnecessary complexity to simple tasks.
If you don’t know what it is for or how to use the tool, leave it in the toolbox.
I’ve been involved with OOP at various levels of complexity for the past 20 years. I’ve seen OO spaghetti many times, and I’ve rarely seen a solid OOP implementation that was maintainable, extensible, easily debuggable, etc. I think this is caused by the following: 1. Programmers who had OOP drilled into them in college and can’t effectively think outside of that box. 2. All of the books that promote complex OOP architectures and concepts where they have absolutely no business. 3. The overuse of inheritance. 4. Developer narcissism: people who gravitate to the most complex solutions because they want to prove how smart or hip they are.
OOP is valuable to be sure, but has been used in many deleterious ways. I’m reminded of Joel Spolsky’s seminal blog about Architecture Astronauts. The problem identified by Joel remains the single biggest issue in software development today.
I think for large complex systems it’s best to use OOP. However, if you are working on a small project it might not be the best choice. There are many things to consider, but variable scope can be a nightmare without the protection that OOP provides.
The textbook (the UML User Guide) from my college days summed it up best: if you want to build a doghouse for your dog, just go to the hardware store and get some nails, lumber, and a few tools. By the end of the day you’ll have a great doghouse and your dog will love it. It may leak a little, but your dog probably won’t mind. If your dog isn’t happy, you can always get a new dog. However, if you use the same approach for building a house for your family, it most likely isn’t going to work out very well. They won’t tolerate a leak. Thus, the authors stressed using models to identify use cases and pattern reuse. You’ll need some blueprints, and you’ll definitely want your house inspected. You don’t want a leak in your house or your code!
The dilemma is that many organizations want to use the dog house approach for everything.
After four whole generations of programming languages, what have we learned?
They’re all exactly the same. Coding is a matter of abstraction. We abstract real life objects and situations into variables and functions.
So what makes one generation of languages different from the other? Each one is a product of its time and the data abstraction needs posed by said time.
The first generation, machine code and assembly, arrived when computers were still novel and applications were simpler. The code focused on instructing the machine to automate a set of simple tasks.
The second generation, C, arrived when data needs increased and applications became more complicated. C took care of most of the machine-specific details and allowed coders to focus on abstraction (problem solving). C simplified programming.
The third generation, Java, arrived well in time with the computer revolution. Computer systems could already be found everywhere, and the web was really starting to take off. OOP took a simple idea, that modularization is the easiest way to solve any given problem, and standardized that idea amongst coders. Modularization wasn’t at all new to developers; there were many apps written in assembly that were already modular in design. The problem was that procedural languages rendered modularization a hell of a hassle. It simply wasn’t practiced as well and as often as it should have been. But the world needed modularization and its simplicity because of the ever-increasing integration of and reliance on computer systems. OOP then came to the rescue by spawning its own set of languages, 3rd gen languages, that easily facilitated modularization of code. It essentially made computer programming more available to the public by making it easier.
Is there any competition between OOP and procedural programming?
There shouldn’t be. Both paradigms are equally capable. OOP being the next generation in line simply made coding easier and pushed programmers towards high level abstraction.
The fourth generation, SQL, is in a league of its own. Some think it’s just for databases, but data and the manipulation of said data is exactly what programming is about.
The way it goes now is that we type up millions of 3rd gen lines, we add in a few thousand 4th gen lines, and if push comes to shove we throw in a few segments of 2nd gen code for optimization and compatibility with low-level systems. It all, however, must compile to 1st gen code in the end, ready to be loaded into RAM.
The QWERTY keyboard was deliberately designed to be the worst possible configuration for use by normal humans, but it’s ubiquitous, despite a valiant attempt to replace it in the 1980s (such as the Apple II’s Dvorak alternative). According to me, OOP rates right up there with the QWERTY keyboard.
I do agree with most of the comments that OOP is not hated in general. Maybe functional languages are a bit of a hype right now, and most of these do not have OOP features. Just because a lot of people leave stereotypical OOP languages does not mean they hate them.
BTW, functional is not the opposite of OOP. The opposite of OOP is procedural, and the opposite of functional is imperative. Scala shows that an OOP language can have functional programming for the implementation of its methods.
There is one thing in the article I have to disagree with: Inheritance is not a marker of OOP. Subtyping is a marker of OOP and inheritance is most often the implementation of this concept in a specific programming language. Prototypes or copying from an existing object are other ways to implement subtyping.
I think one of the major reasons (at least in my line of work) that OOP is so prevalent is that it is currently the best way to write GUIs. I love using Qt to write my GUI. I am not sure, though, if I would like to write a library like this. Strong adherence to every single OOP principle all the time is quite tedious.
On the other hand, I do hate sticking perfectly to encapsulation all the way through, and with this part of OO design principles I strongly disagree. If the only thing a setter/getter does is access a member variable, there is no point in writing them. Some OOP languages even let you have the compiler write the setter and getter automatically and use assignment syntax to call these. Syntactically, you don’t have setters and getters anymore. So why would you enforce them everywhere? I have seen academic examples where you could switch out the implementation of the class entirely, or add logging to every getter/setter. However, I think this is too much overhead for the small possibility that it might happen in the future. Maybe C++ has spoiled me, because with operator overloading you can still replace your member variable with an object that does extra getter/setter work when used.
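Python's `property` is one concrete example of the mechanism described: callers keep plain assignment syntax, and a real getter/setter with validation or logging can be added later without touching them (the class here is an invented illustration).

```python
class Circle:
    def __init__(self, radius):
        self._radius = radius

    @property
    def radius(self):
        return self._radius

    @radius.setter
    def radius(self, value):
        # Validation added here later would not change any call site.
        if value < 0:
            raise ValueError("radius must be non-negative")
        self._radius = value


c = Circle(2)
c.radius = 5      # plain assignment syntax; no set_radius() call needed
print(c.radius)   # 5
```

Until such behaviour is actually needed, a plain public attribute works identically from the caller's point of view, which is exactly the argument against writing boilerplate getters and setters up front.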
To sum it up: I hate overusing OO principles – but I do love OOP.
No one experienced in REAL-WORLD non-trivial/non-niche software development will say anything bad about OOP.
OOP allows us to express code in a natural manner, close to how the real world looks and works: we have objects that have both properties (data) and behavior (methods/functions). We have blueprints (classes) which contain both properties (data) and behavior (methods), from which we create objects. We also have “generic blueprints” (abstract classes) which serve for the creation of other, more specific blueprints (classes that extend abstract classes and inherit their properties and behaviors, possibly modifying them here and there, as when one is tuning a car). We have polymorphism, which allows us to use and process a common property or behavior of different objects that share some common feature. For example, we can scan the bar codes of different products: despite the products being different, they share a common feature, the bar code, which is created according to rules common to all products. Interfaces are just like the interfaces of electronics: we have a panel with some buttons, but the details of what happens when they are pressed are hidden from the user. The same happens with interfaces in OOP (though interfaces can have some additional meanings).
As you can see OOP allows us to express code in natural way which mirrors the real world with its objects and actions.
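The bar-code example can be sketched as follows (class names and code strings are invented for illustration): the scanner depends only on the feature all products share.

```python
from abc import ABC, abstractmethod


class Product(ABC):
    """Generic blueprint: every concrete product must expose a bar code."""
    @abstractmethod
    def bar_code(self):
        ...


class Book(Product):
    def bar_code(self):
        return "book-code"


class Soda(Product):
    def bar_code(self):
        return "soda-code"


def scan(products):
    # Polymorphism: the scanner handles different products uniformly,
    # relying only on the common bar_code() behavior.
    return [p.bar_code() for p in products]


print(scan([Book(), Soda()]))  # ['book-code', 'soda-code']
```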
OOP is an evolution of procedural programming, which in its turn was created to allow us to express code in a manner more natural to us humans than assembly, which is designed around how the computer works, not around how we think, communicate and understand the world. Code is for us programmers (human beings), not for the computer, which doesn’t know anything other than machine code, the thing that compilers turn our code into. Some people seem to forget that (if they were ever aware of it).
At the other end of the spectrum we have the trash bin of software development: so-called “functional programming”, which is the product of a pseudo-intellectual effort from mathematicians who can’t learn actual programming.
It’s not even an actual paradigm, as it doesn’t have any defining characteristic other than ugly and unreadable syntax, and concepts and terminology to match, which can’t possibly lead to maintainable code; often even getting some simple program to work properly is a major challenge.
FP adepts will like to talk about things like “immutability”, “referential transparency” and the like, until we get into details. Like when they realize that you can’t create any useful program without mutability: that’s when they’ll change their tune and say that we must have mutability but it must be “controlled”, yet they won’t be able to provide a commonly agreed and precise way to do that, at which point something called “monads” might come into play, and they will hilariously struggle to explain what a monad is, since they themselves don’t really understand the concept (which is not very sensical anyway). Then they’ll claim one has to learn some advanced abstract algebra to understand it, hoping they can get away with not being able to explain it (it doesn’t really work). FP adepts like to think of themselves as “smart”, yet when challenged to write useful software in a language like Haskell (allegedly a general-purpose language/platform) they try to find excuses and run away from the task. 😀
“Everybody hates OOP?” Simply not true, but a good attention-getter for an article. I agree very much with the commenters who think that claim is, um, *way* overstated.
Yes, it is sometimes the case that OOP programs are not easily maintainable. In my experience, that fault often stems from overuse of inheritance, as has been noted in this discussion, particularly the creation of long chains of itty-bitty classes that individually do not add much. If a class is merely the parent of a subclass, if it is not going to be instantiated and used itself, then maybe it does not need to be in the hierarchy.
OOP is a tool like any other, and can be used well or badly. Slavishly following the OOP lessons that one learned in school may not result in code that can be understood by a new generation of maintainers. But using OOP as a tool to subdivide complexity into smaller pieces that *can* be understood, developed, extended, maintained — that is a great boon to programming.
Over the years, I have found that OOP helps me organize my intentions and the actions of my code, and to isolate data and functions, and all of those reduce complexity and make debugging dramatically easier. I don’t think that it is reasonable to attack a large programming project without at least a serious OO analysis to guide the structure, and probably OOP languages and tools to assist with the coding.
For instance, I just finished a large, multi-year research project in which the directions of the research changed dramatically over the many months. True to the old maxim, we didn’t know what we wanted until we got what we asked for. Every time we found an answer to a question, another question arose that required some additional coding, some recoding. Few modules remained unaltered for more than a few months at a time, but some did. The few that did were ones that represented well-understood objects, and isolated them (data and function) from changes in the business logic due to new requirements. If the edit history of a class module had a year of dust on it, I considered that a real success.
(Approximate magnitude of the project: 40 classes, 70 modules (Python, bash, R), 10,000 lines, 30% comments. Not a big deal, but took more than four years to evolve to answer all the questions. Some programmers I know would surely have used 100 classes where 40 sufficed.)
OO was the best thing that happened to programming since the death of the punched card.
Functional programming makes a lot of assumptions about the environment it runs in; so in OOP terms, programs that are functional run inside components or encapsulated environments. On the other side, OOP can leverage a lot from FP to achieve functionality that is not dependent on the state of anything.
In order to understand the reason for the rise and fall of certain paradigms, you need to understand and map the evolution of software systems and the adjacent data-transport infrastructures. Small linear systems allow heavy application of algorithm-driven design, which favors functional design; such systems, however, are not flexible, and there is only a certain tolerance within which you can stretch the input and expect a valid output. Another problem with such systems is the way they mutate as they generate knowledge about the problem they are designed to solve. Although, as correctly pointed out in the article, linear execution is the way the processor runs the code, the learning curves and the changes that come with them are non-linear and not continuous; this makes the scalability of such systems poor. At the other end, big non-linear systems that address a domain of problems favor object design. This adds additional complexity, because the design itself requires the build-up of meta-knowledge of the problem domain at hand, and a planned approach that transforms that meta-knowledge into a continuous, coherent structure.
Going back to my initial point: when software development started, the problems it addressed were operational, not systemic. Running an executable with certain parameters generated output, and the program removed itself from memory. As the industry progressed, however, the problems became more and more complicated, and the programs started expressing more features. This itself did not require a paradigm shift, but with those features came constantly growing code bases and the mutation of the problem sets. In a way you may say that software systems came to life, and life at its core is polymorphic. This rendered functional approaches progressively more difficult and expensive to maintain, which naturally led to the adoption of object-oriented design. As time continued to pass, we saw the downfall of mainframes that ran huge loads and the rise of clusters; this pushed the object-oriented approach further, because even more internal states had to be managed, and the synchronization between servers required better encapsulation. Then the web happened. It took some time, but with markup as a universal application structure, the functional approach began to regain momentum: from applets to advanced scripting, the DOM and the robust data-transfer layer made small linear programs relevant again. This naturally led to more and more features in the browsers, which in turn enabled more and more complicated programs, and so we arrived at single-page applications, content-management frameworks and cloud farms. All that made new software come into life, and again it faces life’s polymorphic nature. With the emergence of IoT ecosystems we will see the same cycle again: we will go back to functional and then scale it to object. As a side note, the conflict at hand is for me similar to an argument about which color is better, the red one or the green one, without any context.
If we add context we may have an argument: for example, red pops out, so it is better for errors than green; but then green is calm on the eyes, so it is good for a background, etc. The change in paradigms follows the lessons that life teaches, and the same is true of political systems, philosophical ideas, cultural phenomena, morals and so on.
The discussion pops up everywhere once in a while. Arguments are exchanged about why certain things are easier to do with OOP or FP, inheritance is discussed, but no conclusion is found.
Alongside computer science I studied cognitive psychology at university. What even most psychologists are not aware of is that the way of thinking in humans varies a lot. Most people have vivid visual memories of past events or locations they have visited. For other people it is completely impossible to have a mental picture in their inner eye. Some people think in words and sentences; they live with a narrator who almost constantly talks to them. Others cannot even comprehend what it would be like to have an inner voice talking to them.
Some people argue OOP is intuitive. For me, modelling any real world in stateful objects performing methods is as counterintuitive as it can be. When I read OOP program code I constantly think: “What a stupid way to do this” or “Why do I need a class just to do this simple thing?”. It causes me almost physical pain to look at these ways of modelling.
My mental model of the world outside does not consist of objects doing things, but rather of actions I can apply to things. The house is not painting itself yellow (house->paint(yellow)); rather, the house gets painted yellow: paint(house, yellow), and the car can get painted yellow as well: paint(car, yellow). I don’t need two classes for house and car, or, even worse, a class paintable_thingy from which house and car both inherit.
I studied computer science, but when OOP became omnipresent I left development because I hated OOP programming, and changed to a consulting career. I do think nowadays it’s the mental model in your brain that makes OOP suitable or unsuitable for you. And there is very little chance that one party can explain to the other why they see OOP as a good or bad way to model the world.
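The two framings contrasted here can be put side by side (a sketch with invented names): the thing carrying the action, versus one verb applied to any thing.

```python
# Object-oriented framing: the thing carries the action.
class House:
    def __init__(self):
        self.color = "white"

    def paint(self, color):
        self.color = color


# Action-oriented framing: one verb applied to any paintable thing,
# with no shared base class required.
def paint(thing, color):
    thing.color = color


class Car:
    color = "grey"


h, c = House(), Car()
h.paint("yellow")      # house->paint(yellow)
paint(c, "yellow")     # paint(house_or_car, yellow)
print(h.color, c.color)  # yellow yellow
```

Both end up in the same place; which reads as "natural" seems to depend on the reader's mental model, which is exactly the commenter's point.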
Exactly the way I think
OOP in itself is not the problem. It’s the people implementing it. Organized people will find ways to organize code no matter what programming style, language or framework they use. They will be well organized, know where to find things, and do things in the fastest possible manner.
On the other hand you give any clown a language that is easy and abstract concepts they don’t understand and they will create hell.
The best examples of OOP are (1) live reloading and (2) hot reloading.
In live reloading, you have at least 2 components (1) application and (2) updater.
In hot reloading, you have at least 2 components (1) application and (2) shared library.
The stated components are instances of classes. Newer versions of them are instances of derived classes. Run-time linkage is required for hot reloading, or else OLD fails from SOLID.
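The component substitution described above can be sketched as follows (a simplified illustration with invented names; real hot reloading also involves run-time linking of a shared library):

```python
class Updater:
    """Base component; the application depends only on this interface."""
    def check(self):
        return "v1: up to date"


class UpdaterV2(Updater):
    """A newer version as a derived class: substitutable at run time."""
    def check(self):
        return "v2: up to date"


class Application:
    def __init__(self, updater):
        self.updater = updater

    def swap(self, updater):
        # "Live reload": replace the component without restarting the app.
        self.updater = updater


app = Application(Updater())
print(app.updater.check())   # v1: up to date
app.swap(UpdaterV2())
print(app.updater.check())   # v2: up to date
```

Because the application only talks to the base interface, the new version can be dropped in while everything keeps running, which is the open/closed and substitution story in miniature.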
How come there is “OOP” in the title but the article is basically “OO (without P!) languages are popular, but FP can do the same”?
First of all, it is absolutely clear to me that there is not the slightest understanding here of what the core idea of OOP is.
For example, this nonsense: > “and good OOP practice means you provide getter and setter methods to control access to that data”
^ That is what people write when they don’t understand OOP. Encapsulation means you hide data and expose *behaviour*, and the main goal of OOP is to have code answering “what”, not “how”.
You create objects, gift them behaviour via interfaces, hide their data via encapsulation and then your code becomes very fluent when done right:
Car ford = new FordTruck(new V8Engine(), new PickUpBody(), new OffroadTyres());
Driver driver = new TruckDriver();
That’s it. Next, when you need a new Engine, you simply replace it, as you do in real life. Again, you tell the computer what to do, not how to do it (well, at the business layer, at least). The data is something you don’t care about; the behaviour you do. That’s why OOP became very popular in enterprise: it became much easier to implement business logic, and that is exactly what real business needs.
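A Python rendering of the snippet above (the class names are the commenter's own hypothetical examples, and the engine variants here are invented) shows the "simply replace it" step in action:

```python
class Engine:
    def start(self):
        return "engine running"


class V8Engine(Engine):
    def start(self):
        return "V8 rumbling"


class ElectricEngine(Engine):
    def start(self):
        return "silent start"


class Car:
    """Exposes behaviour; the engine's internals stay hidden inside it."""
    def __init__(self, engine):
        self._engine = engine

    def drive(self):
        return self._engine.start()


ford = Car(V8Engine())
print(ford.drive())           # V8 rumbling
ford = Car(ElectricEngine())  # need a new engine? simply replace it
print(ford.drive())           # silent start
```

The calling code never changes: it only ever asks the car to drive (the "what"), and the engine decides the "how".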
This is exactly it
OOP is simply a way to organize your code. I started many years ago programming ANSI C. Procedural programming was great as long as the program was small, but as the sources grew, errors and side effects became hard to avoid and maintain. At more than 30 kB of source code, most of my project development slowed down significantly.
With OOP things are very different. I still use some code that is really old now but still does its work. And over time some classes have grown very big. Some of them compile to more than 60,000 lines of code but are still easy to handle, as each has one single defined task.
So, from my view there is no question what’s best. For small projects, procedural programming is quick and effortless. But as things grow, OOP is the most efficient tool to handle your code.
For web programming things are not so easy, as HTML can only generate global references. I created a small web framework called DML (https://github.com/efpage/dml) which hopefully can overcome this issue. And there are other approaches like web components, that have a similar target.
I’ve read all the answers up to this point in time, and I notice that the real problem, as many mentioned explicitly or not, is basically how we think. So one model that would “fit” better than these two paradigms is a “message sending” paradigm; I think that is what OO is trying to be. Let’s change our way of thinking for a moment: in this world of message sending, or signals, your model (function, object or data) can “change” (state, output or connections) based on messages.
So we can think of sending a message to one “thing”, and if, for example, that message has to mutate 50% of the state, output or interconnections across the program, then it will propagate accordingly. There is no need to model this in a strict functional or OO paradigm, so it stays within proper human comprehension.
At the end of the day, OO is just an incomplete evolution toward “message sending”. Maybe the word “object” played with the minds of many people and inherently halted progression to a more… “correct” evolution.
Of course, like any paradigm, it’s always a discipline of how to “think”. If your brain can’t, then it is the paradigm’s fault; we are humans, not slaves to paradigms.
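A minimal sketch of the message-sending idea (all names invented): receivers subscribe to messages, and a single send propagates to every interested party, regardless of whether they are functions, objects or data.

```python
class Mailbox:
    """Minimal message-sending sketch: models react to messages
    rather than being called through a fixed class interface."""
    def __init__(self):
        self._handlers = {}

    def on(self, message, handler):
        self._handlers.setdefault(message, []).append(handler)

    def send(self, message, payload):
        # One message may propagate to many receivers.
        for handler in self._handlers.get(message, []):
            handler(payload)


state = {"count": 0}
box = Mailbox()
box.on("increment", lambda n: state.update(count=state["count"] + n))
box.on("increment", lambda n: print("count changed by", n))
box.send("increment", 5)
print(state["count"])  # 5
```

Smalltalk-style OOP and actor systems are both built around this shape; classes and methods are one particular way of pinning the receivers down.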
OOP successful? If you mean widespread, then it is successful. However, I would say that it is not a success, because the software I use on a daily basis that has been developed with OOP is just shitty, at every level. I have a shitty bug-tracking app, a shitty time-tracking app, a shitty database and a shitty communication app, a shitty email app, a shitty collaboration app… just a world of shit. Every one of these apps was developed using OOP and Agile, and all by different companies. The one thing they all have in common is their shittiness. OOP is harder to debug and refactor than any functional or procedural programming I’ve ever seen. The Trillion Dollar Disaster essay is spot on.
Well, your title is certainly provocative, but inaccurate. Most technology seems to evolve in a Hegelian fashion because humans are driving it; as one becomes older, one has the opportunity to see this more and more. The younger generation tends to be more vocal and champions their particular beliefs, which are by definition myopic. Functional programming has a light shining on it now as the antithesis of the object-based paradigm. As things evolve, we will find some synthesis that includes concepts from both. Object-oriented software evolved as a way to manage complexity in systems that were becoming increasingly complex, and it does this exceedingly well. If you have ever worked on a very large, complex system, being able to address it at different levels of complexity, hierarchically, is a godsend compared to the alternative. It is especially beneficial to other team members who may be new to a system and have to learn it; having objects structured in the language of the domain being addressed makes this possible. It is also interesting to see a sophisticated object system become slightly more than the sum of its parts as the virtual models shape themselves to the natural objects and relationships they represent. Large, complex systems are its sweet spot.
It becomes less adept (without some care) when you need to distribute processing, do more complex algorithmic work over larger data sets, and then bring that data together after processing. The immutability favored by the functional paradigm is a good fit for things like this. But from what I have seen, functional code reads and flows very poorly. So a lot really depends on the domain you are working in and the problem you are trying to solve as to which paradigm might be a better fit. You also have to consider what hardware things are being run on, and whether you are working in a compiled or interpreted language. Software maintenance is another giant factor, which is often overlooked. I think that for proper treatment of the subject, the entire SDLC would need to be factored in.
In my opinion, looking at the pros and cons, functional programming has niche applications for which it is well suited. As the current flavor of the year, it is being used in many places where it doesn’t fit. I was recently brought onto a project using React Native with the functional paradigm, and I can’t think of any environment that functional is worse at than a GUI-based mobile app. It is a square peg in a round hole that I am stuck working on. Well, I had better get back to working on it.
Anyway, this is just my opinion. I thought the article was well written, well thought out, and currently relevant. I found it through my search trying to find ways to appreciate the functional paradigm.
Maybe we should have a look at the “scale” of applications. OOP was not born as a religion. People were simply not able to handle side effects in monolithic procedural programs, so OOP was a good way to build closed, self-contained portions of code. A class is basically a small program containing data and procedures. If FP helps to build those “units” in a better way, there is no good reason to use a less efficient style.
There is the tale about the banana, the gorilla, and the jungle that is used to blame OO. This is simply not true in most cases. OO gives access to the underlying classes in a way that makes it unnecessary to implement the underlying functions again and again. You do not get a simple banana, but a banana that can live and grow. If you only want a banana, there is no need to build it in an OO style. Object classes with deep inheritance (like Windows UI elements) often inherit hundreds of useful properties they need to live in the Windows ecosystem.
I wrote code 25 years ago in an object-oriented style that is still in use and does a good job. It is not still around because OO is great, but because OO made it simple to keep it running in changing environments.
Surely we can learn a lot from the FP style. But I cannot see any reason why we should not build objects or classes following these principles.
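The closing point, building classes that follow FP principles, can be sketched like this (a toy example; the `Account` name and API are mine): the class still encapsulates its data, but the fields are immutable and “mutations” return fresh instances instead of changing state.

```java
public class FpStyleDemo {
    // An FP-disciplined class: immutable fields, no setters.
    static final class Account {
        private final long cents;

        Account(long cents) { this.cents = cents; }

        // "Mutation" returns a new value; the original is untouched.
        Account deposit(long amount) { return new Account(cents + amount); }

        long balance() { return cents; }
    }

    public static void main(String[] args) {
        // Each call leaves the previous Account unchanged: no side effects.
        Account a = new Account(0);
        Account b = a.deposit(100).deposit(50);
        System.out.println(a.balance()); // prints 0
        System.out.println(b.balance()); // prints 150
    }
}
```

This keeps the OO benefit (a self-contained unit of data plus procedures) while borrowing the FP benefit (no hidden side effects), which is exactly the combination the comment argues for.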
A few observations:
– OO comes from the 1960s (Alan Kay), not the 1980s
– Different problems require different tools. OO is a great paradigm for a (very) large class of problems, but not for all problems.
– OO does imply encapsulation, but it does not imply inheritance or polymorphism
– Some OO languages support useful features like traits and mixins, which go beyond simple inheritance (Scala)
– OO is a methodology, not a language characteristic. You can write procedural code in Java, for example, and OO code in C
– Getters and setters are indeed evil (Allen Holub)
– Functional design patterns are commonly used in OOP
– Far too many Java APIs are not especially OO
– Far too many people do not really understand OOP at all and miss out on its power
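Two of the observations above, that OO implies encapsulation but not inheritance, and that OO is a methodology rather than a language feature, can be illustrated side by side (a toy sketch; the names are mine):

```java
public class StyleDemo {
    // Procedural style, written in Java: data and behavior kept separate,
    // behavior is just a function over plain values.
    static int area(int w, int h) { return w * h; }

    // OO style: the data is hidden behind behavior. Note there is
    // no inheritance and no polymorphism, yet it is still OO.
    static final class Rect {
        private final int w, h;
        Rect(int w, int h) { this.w = w; this.h = h; }
        int area() { return w * h; }
    }

    public static void main(String[] args) {
        System.out.println(area(3, 4));            // prints 12
        System.out.println(new Rect(3, 4).area()); // prints 12
    }
}
```

Both fragments compute the same result; the difference is purely methodological, which is the commenter’s point.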
“Millions of developers quickly moved to Java due to its exclusive integration in web browsers at the time.”
– This is definitely not the case. I was at Sun in 1997. Java was never successful in browsers or on the desktop. What made Java so popular was its place as a very robust language for the server side (static type safety, garbage collection, security, etc.), as in WebLogic, Tomcat, and so on.
BTW, lambdas were under development in 1997, along with many other features that are only emerging today. Sun’s Java team was ultra-conservative when it came to introducing new features. It might surprise some to know that modular Java was also under development in 1997. Also, Java already supported lambda-like functionality via anonymous classes and interfaces.
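That last point can be illustrated: before Java 8, a lambda-shaped piece of behavior was written as an anonymous class implementing a single-method interface, and the modern lambda is shorthand for the same thing. The example below is mine, not from the comment:

```java
import java.util.Arrays;
import java.util.Comparator;

public class LambdaDemo {
    public static void main(String[] args) {
        String[] words = {"pear", "fig", "banana"};

        // Pre-Java-8 style: an anonymous class implementing Comparator.
        Comparator<String> byLength = new Comparator<String>() {
            @Override
            public int compare(String a, String b) {
                return Integer.compare(a.length(), b.length());
            }
        };
        Arrays.sort(words, byLength);

        // Java 8+ equivalent: a lambda for the same single-method interface.
        Arrays.sort(words, (a, b) -> Integer.compare(a.length(), b.length()));

        System.out.println(Arrays.toString(words)); // [fig, pear, banana]
    }
}
```

Both sorts produce identical behavior; the lambda is syntactic sugar over the anonymous-class mechanism the commenter mentions.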
Excellent article for a fascinating discussion. I read every comment; thanks to everyone and especially the author.
If everyone hated it, OOP would not be widespread. It’s that simple: it’s widespread because not everyone hates it, and most do not hate it. It just happens that the ones who hate it are the most vocal about their hatred; it’s like a political campaign. The silent majority do not speak about it, as they are too busy writing practical software, be it in OOP or another paradigm that works for their applications.
I woke up this morning (2022) knowing that OOP does not work out of the box. The worst part is that in the job offer they were looking for Java engineers.
Dr. Alan Kay at OOPSLA 1997, the man who coined the term OOP. Let’s just leave this here for all to argue about.
https://www.youtube.com/watch?v=oKg1hTOQXoY&t=668s (link posted for historical preservation purposes)
OOP is popular in big corps because it makes programmers easier to replace and outsource. Simple as that.
I know the clear opposite of that: a medieval castle-carving stonemason.
It is extremely intuitive to think in terms of creating a thing, changing a thing, asking a thing to perform a task, and deleting a thing, so OOP is canonical.
If you erased all knowledge of OOP from history with magic, we would discover it again, immediately. Inevitably.