What I wish I had known about single page applications
[Ed. note: While we take some time to rest up over the holidays and prepare for next year, we are re-publishing our top ten posts for the year. Please enjoy our favorite work this year and we’ll see you in 2022.]
As a Java developer, I have spent most of my professional life working on the parts of software systems that most people don’t see. The so-called back-end of the software stack. But lately I have found myself wanting to branch out, and dabbling more in HTML and UI development.
A couple of years ago, this natural curiosity led me to start a new side project. My project was meant to be a hobby application that would only be used by me and a few friends, so I didn’t spend too much time thinking about a long term roadmap or requirements. The main goal was to get it working fast.
I settled on JHipster, a development platform for building web applications using modern technology: Angular, React or Vue for the client side, and Spring plus Gradle or Maven for the server side. It’s been around for years, is very well documented, and has great community support.
Within a few weeks I had a functioning application that met all my needs. But a funny thing happened soon after I launched. Other people started using the application. Knowing I had created something useful for a large audience was really satisfying. And so I did what any other developer who is already stretched thin and trying to balance a full time job and a family and hobby projects would do: I spent my nights, weekends, and every free moment I had working on it.
However, the more I tried to improve it, the harder things got. I spent a lot of time looking up how to do new things that weren’t part of the boilerplate setup. I was learning some of the limitations that now felt like major roadblocks. And after a few months, it became clear to me that my choice of technologies was becoming a hindrance to making the application better. Ultimately, I decided to re-write most of it using frameworks that were more familiar to me.
I want to pause here and clarify: JHipster and Angular are not bad platforms. Far from it. I'd recommend them in a heartbeat for the right project.
When I say they were becoming a hindrance, what I really mean is that my lack of knowledge of the technologies had come back to bite me. For all the reasons I had chosen them, there were plenty of other reasons that might have made me think differently, had I known about them.
It was a classic developer lament: I didn’t know what I didn’t know.
And what I came to realize is that my application was suffering in several key areas that were a direct result of the platforms I had chosen. Things like SEO, social sharing, and caching. Features that didn’t matter for a hobby application, but were vital to the long term success of a growing product.
Even though I ultimately ended up re-writing the application and it has continued to thrive, I think it’s important to reflect back on the early days of the project. There are so many lessons to learn from both success and failures. I wanted to dive into some of the things I learned the hard way when I wrote my first single-page app, and hopefully help others who find themselves in the same boat.
Anatomy of a single-page application
Before diving into some of the issues I ran into, it’s worth breaking down the basic principles of a single-page app. This is meant to be a high level overview and not a technical deep dive on any specific platform.
The traditional web works like this: you click a link, your web browser sends the request to the server, and the server sends back some HTML. Every response from the server is the full HTML document required to render a web page.
A single-page app breaks this paradigm. The web browser sends the initial request and still gets back some HTML. But the response from the server is just a bare bones HTML document with no real content. On its own, this HTML is generic and doesn’t represent anything specific about the web site.
By all outward appearances, the application behaves like a traditional web site. The user sees HTML with images and buttons and interacts the exact same way. But under the hood things are very much different.
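To make that "under the hood" difference concrete, here is a minimal sketch (all names hypothetical, not from my actual app) of what an SPA's JavaScript does after the server hands back the empty shell: it fetches data and assembles the markup itself, in the browser.

```typescript
// Minimal sketch of client-side rendering (all names hypothetical).
// The server returns only an empty shell like <div id="app"></div>;
// the JavaScript bundle then fetches data and builds the markup itself.

interface Article {
  title: string;
  body: string;
}

// Build the page's real content from data, entirely on the client.
function renderArticle(article: Article): string {
  return `<article><h1>${article.title}</h1><p>${article.body}</p></article>`;
}

// In a real app this would be driven by a fetch() call, e.g.:
//   const article = await (await fetch("/api/articles/42")).json();
//   document.getElementById("app")!.innerHTML = renderArticle(article);
const html = renderArticle({ title: "Hello", body: "Rendered on the client." });
console.log(html);
```

The key point: none of that content exists in the HTML the server sent, which is exactly what trips up anything that reads the raw HTML instead of executing the JavaScript.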
Search engine optimization
Search engine optimization (SEO) is the process of formatting web page content so that it is easy for web crawlers to understand.
Just like a web browser, Google and other web crawlers request the HTML contents of pages on a web site. But instead of rendering them like a web browser would, they analyze the HTML to semantically understand what each page is about.
I knew this because Google was telling me that it didn't understand my content. When I looked at the search analytics for my site, it was ranking for exactly one keyword. And that keyword had nothing to do with my website.
As the graphic shows, Google thought my website was related to Maven proxy configuration. Needless to say, my hobby project was not in any way related to the open source Apache project.
So what was happening? My website had hundreds, if not thousands, of unique web pages with varying content. Yet, Google only showed it in search results to people searching for the words “mvnw proxy”. Even if SEO wasn’t a main focus of mine, after a few months Google should have been able to ascertain that my site wasn’t about Maven proxy configuration.
And like most single-page apps, the default HTML included lots of helpful developer information intended for troubleshooting, but never actually displayed in a web browser when things are working properly. Even worse, the template is the same for all URLs on the site, so Google got the same (wrong) interpretation for every page it crawled.
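That "same template for every URL" behavior falls out of how SPA servers are typically configured. The sketch below is a simplified, hypothetical version of the catch-all logic (not the actual JHipster configuration):

```typescript
// Hypothetical sketch of the catch-all routing an SPA server typically
// uses (not the actual JHipster configuration): static assets are served
// as-is, while every page URL gets the identical index.html shell.

function resolveFile(path: string): string {
  // Static assets are served directly...
  if (path.startsWith("/assets/") || path.endsWith(".js") || path.endsWith(".css")) {
    return path;
  }
  // ...but every other URL returns the same index.html shell, so a
  // crawler sees the same (empty) document for every page on the site.
  return "/index.html";
}

console.log(resolveFile("/users/42")); // "/index.html"
console.log(resolveFile("/about"));    // "/index.html"
console.log(resolveFile("/main.js"));  // "/main.js"
```

From the crawler's point of view, hundreds of distinct URLs all resolve to one document, which is why Google could only latch onto whatever stray text happened to be in that shared template.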
Social sharing

Another area I ran into problems with was social sharing. My website allowed users to create dynamic content, and also included lots of static content that others might find useful. And in the early days after launch, I indeed saw people sharing links to my website across various social media platforms.
Social networks, much like search engines, rely on the content in web pages to understand what the web page is about. Unlike search engines though, they rely less on the visible content of the web page (the text and images humans see) and more on the metadata (the stuff inside the HTML that us humans don’t care much about).
For example, when you share a link to a website on Facebook, the first thing that happens is Facebook reads the webpage and generates a nice preview of that article. The preview has a title, a line or two of descriptive text, and an image.
But those previews are not generated magically or using some sophisticated AI algorithm. Facebook is relying on metadata inside the HTML header area to create previews. It’s entirely up to website owners to provide the information Facebook needs to build a meaningful preview of every page on their website. Many CMS systems such as WordPress make this really easy with plugins. But I was writing a brand new application from scratch and would have to create this metadata on my own.
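For reference, the metadata Facebook reads lives in Open Graph `<meta>` tags in the page's `<head>`. The builder below is a hypothetical sketch of generating those tags per page (the `og:title`, `og:description`, and `og:image` property names are the real Open Graph ones; everything else is illustrative):

```typescript
// Hypothetical sketch of generating per-page Open Graph metadata, the
// tags Facebook reads when it builds a link preview. The property names
// og:title, og:description and og:image are real Open Graph names.

interface PageMeta {
  title: string;
  description: string;
  imageUrl: string;
}

function openGraphTags(meta: PageMeta): string {
  return [
    `<meta property="og:title" content="${meta.title}">`,
    `<meta property="og:description" content="${meta.description}">`,
    `<meta property="og:image" content="${meta.imageUrl}">`,
  ].join("\n");
}

// Each page needs its own values. A single static template gives every
// shared link the same preview, which was exactly my problem.
console.log(openGraphTags({
  title: "My page",
  description: "What this page is about",
  imageUrl: "https://example.com/preview.png",
}));
```

The catch for an SPA is that these tags must be present in the HTML the server returns, because Facebook's scraper, like Google's crawler, reads the raw document rather than running the app.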
What this meant for my users is that every link from my website that was shared to Facebook and other social networks was generating the exact same preview. Whether it was users sharing their custom content, one of the static pages that I auto generated, or even the home page, every link shared on social media had the same preview.
Functionally speaking, nothing was wrong. A user could still click on the preview and be taken to the correct URL on the site. But I didn’t like the idea that the preview wasn’t helpful. After all, I was trying to grow my user base. If someone shared my content to their friends and family, I wanted to have the best chance of people clicking those links so they could discover the site.
Caching

Another area I quickly became concerned with was caching. With a small initial user base, I never worried much about expensive database queries or page load times.
But as new users started to use the website, I started to get worried about these things. Were my queries as performant as possible? Was I taxing MongoDB with too many requests? Would new users give up if a page took too long to load?
I’m never a fan of premature optimizations, but this was an area that I felt would become a problem quickly if things kept trending the way they were. So I started thinking about how to improve some of the MongoDB queries and page load bottlenecks. And one of the first things that came to mind is caching.
I’ve worked with a variety of enterprise caching technologies such as Coherence, Ignite, Redis, and others. For what I was looking to do, these all felt like overkill. Plus, they would add to the compute and memory costs of a project that was still technically only a hobby.
Instead, I decided this was a perfect use case for CloudFlare. I was already using CloudFlare as my DNS provider because they provide a ton of great features for free. And one of those great features is page caching. It’s free and requires no additional coding on my part.
Here’s how it works. CloudFlare acts as a reverse proxy to my website. This means all requests are really going through their infrastructure before being forwarded to my website. Among other things, this allows them to cache my server responses across their vast network of global data centers. All I have to do is configure which set of URLs I want them to cache and how long they should be cached, and CloudFlare handles all the heavy lifting. All without me writing a single line of code.
Periodically, the cached content expires and CloudFlare has to pass the request on to my server. But in aggregate, this approach can drastically reduce server load because far fewer requests get through. As a bonus, the user experience improves because returning cached content is much faster than having the web server generate it from scratch.
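Conceptually, the rules I configured amounted to something like this (a simplified sketch, not Cloudflare's actual API; the `Cache-Control` values themselves are standard HTTP):

```typescript
// Simplified sketch of per-URL edge-caching rules (not Cloudflare's
// actual API). The returned strings are standard Cache-Control values:
// "public, max-age=3600" lets any cache store the response for an hour,
// "no-store" forbids caching entirely.

function cachePolicy(path: string): string {
  // Cache the home page and generated content pages for an hour...
  if (path === "/" || path.startsWith("/pages/")) {
    return "public, max-age=3600";
  }
  // ...but never cache authenticated or user-specific endpoints.
  return "no-store";
}

console.log(cachePolicy("/pages/guide")); // "public, max-age=3600"
console.log(cachePolicy("/api/me"));      // "no-store"
```

The whole point is that only the developer knows which URLs are safe to cache; the proxy just applies the rules you give it.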
This all sounded great in theory. But in practice, it wasn't quite working as I expected. After making the change and enabling page caching, there was no appreciable difference in my MongoDB cluster resources. I was expecting a clear decrease in resource utilization as far fewer queries would be made, but instead things mostly stayed the same. In hindsight, the likely explanation follows directly from the single-page architecture: the responses being cached were just the generic HTML shell, which was cheap to serve in the first place, while the expensive MongoDB queries ran behind the JSON API calls the app made after loading, and those weren't being cached.
Technology envy

Another general area I wish I had been more cognizant of is technology envy. As a developer wanting to improve my skill set and my value to potential employers, I'm constantly on the lookout for new technology. But sometimes it's easy to get envious of what others do, without realizing the skills you already have are valuable too.
This happens a lot on social media. I’m constantly in awe, and frankly a bit jealous, of what some of other people create and build. I see buzzwords and technology and think to myself, “I really ought to know how to use these things.” And while there is value in learning new things, it’s more important to be adaptable and know the right tools for the job. As the saying goes, a jack of all trades but a master of none.
And that’s exactly what happened when building my first single-page app. I was so caught up in the short term excitement of learning a new technology, I never stopped to consider how that decision would impact the future of the app.
And eventually this came back to haunt me. As new feature requests came in or I had new ideas, I found the choice of framework ever more frustrating. Whenever I wanted to add a new page, for example, there were multiple areas of Angular code to touch: routers, controllers, services, templates, etc. I was constantly looking up how to do things, and went down my fair share of rabbit holes trying to figure out “problems” that eventually proved to be my own doing.
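To illustrate the moving parts, adding even one page meant touching a route table like the one below, plus the component, service, and template behind it. This is a simplified, hypothetical sketch of Angular-style routing, not my actual code:

```typescript
// Simplified, hypothetical sketch of Angular-style route registration
// (not my actual code). Each new page needs an entry here, plus a
// component class, usually a service for its data, and an HTML template.

interface Route {
  path: string;
  component: string; // stand-in for a component class reference
}

const routes: Route[] = [
  { path: "", component: "HomeComponent" },
  { path: "items/:id", component: "ItemDetailComponent" },
];

// Adding a "reports" page means another route entry here, and a new
// ReportsComponent, ReportsService, and reports template elsewhere.
routes.push({ path: "reports", component: "ReportsComponent" });

console.log(routes.length); // 3
```

None of this is unreasonable for a framework, but when you are learning as you go, every one of those files is another place to get stuck.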
In a sense, this is how we learn. We can take all the programming boot camps and courses under the sun, but real learning comes from frustration. Real knowledge and understanding come from hours spent stepping through code in a debugger. Unfortunately for me, this project had taken on new meaning and it was no longer the right context for me to explore and learn.
This isn’t meant to be a critique of single-page apps on the whole. It’s a cautionary tale that I think many developers can relate to. The excitement of learning something new, creating a functioning application from the ground up, and then reflecting on the lessons learned along the way. And even though I eventually re-wrote my application using technology I was more familiar with, the experience was invaluable.
Mistakes are good. They help us learn. They help us make better decisions down the road. I’ve already got new ideas for how I can use single-page apps for other projects I am working on. And this time I’ll have confidence knowing they are the right tool for the job. I can also now confidently speak to clients and tell them when I think a single-page app makes sense for their use case. And I can discuss my experience with colleagues, helping them to avoid some of the pitfalls I had to deal with. In the end, this was still just a hobby project, and the experience I gained will go a long way.
In a perfect world, developers would have all the information they need up front. Every requirement, every change in scope would be known ahead of time. But this isn't how our industry works. We create roadmaps as best we can, and we choose technologies that make sense given what we know. More often than not, our best laid plans don't pan out the way we envision. And that's ok.
Unfortunately, this is a recurring theme. It started with RAD tools like Access, which allows users not well-versed in database design to write applications really fast, but just as quickly paint themselves into a database design corner. Scaffolding tools that allow you to knock together CRUD web applications quickly are another example, and there are many others.
The latest salvo is the “Low Code” movement. I shudder.
But without a SPA, on the back end you have a router, controllers and actions, and HTML template languages with includes and blocks. Angular, React, Vue and Svelte move that paradigm to the front end and vastly simplify the back end. The BE becomes just serverless functions on API endpoints. No more MVC, Ruby on Rails, Django or Laravel to stress over.
On the topic of low code: the code is not, and never has been, the issue. The challenge has always been around managing polluted and incomplete data and modelling real-world objects and concepts. Low code doesn't solve those issues; it just removes the tools you need to solve them.
This article is a great example of how a lack of understanding and knowledge of a certain technology limits your perspective. Sorry to say, but you haven't achieved the purpose of your hobby project. Following tutorials that only touch the tip of the iceberg is not how you learn any technology.
SEO and social sharing are long-solved problems with SPA applications; in some frameworks, Angular for example, it is really a matter of a single configuration setting.
And as for caching, your problem is that you're not caching the right resources. With an SPA, one usually uses a CDN to distribute the SPA base (HTML, JS, CSS) and an API-organised backend. When you need caching, you ought to cache the API calls, which usually return JSON.
May I please know the single-line configuration for Angular that you are talking about? Thanks!!
Well, there are solutions to your problem: hybrid apps that are prerendered for the first page load and work like an SPA afterwards. Check out Nuxt.js, for example. There are also React alternatives like Next.js.
I understand your points, but in most cases, SPAs can’t eliminate the backend as much as they’d like us to think.
Client-side rendered / generated frontend applications especially will often need a weak backbone backend that doesn't contain any valuable information or generated functions, unless perhaps you use an asymmetric encryption algorithm for generating those random numbers and secrets.
And then you end up slapping it on top of a new or existing Rails, Django, Laravel, or Symfony project anyway.
So you end up with a server that conditionally renders different single page applications until you realize that one is sufficient.
You then begin moving more logic from the server backend to the client-side backend, shuffle things around, build API endpoints, and run into the first problems with authorization. This might take one day to a whole week, if you don't just choose to ignore it after playing around with various different formats of token encryption.
This leaves you with an MV(MVVM)C application, an utterly terrifying and ugly, hard-to-maintain bastard.
I'd say that if you really know those languages and are proficient enough, this might not be a bad choice. But owners or shareholders might like it better if your app was just a plain Rails application, or even better a Symfony project with miserable spaghetti code. They want it done quickly and cheaply. If you're working solo or with friends, this shouldn't matter so much.
Not true at all. Just because you have a normal website doesn't mean you can't use JS and AJAX. Regardless of the technology, most websites use a backend. HTML templates are so simple, just nesting HTML.
My wife was on a call with a low code vendor and they were talking about the skills needed to use the “No Code” tool. I kid you not they said software developers with experience writing code, debugging, and building complex systems is the skill set necessary for success. I laughed and said well at least he’s being honest.
Wow, how can he not smell the irony there? I think the RPG Maker tool really got it right, CryEngine as well. Programs like Fruity Loops also let you easily create digital music without using any console or writing any code, although you'll interact with sine, cosine and tangent directly (visually).
I think people that write "low code" / "no code" solutions should look at game engines and music programs designed for beginners. GameMaker and Macromedia Fusion are also superb examples. But this will have to wait until the world starts using big data to connect different islands of knowledge instead of focusing on surveillance and marketing.
Sorry to be pedantic, but FruityLoops was renamed "FL Studio" in 2003 😉
Well written and thoughtful. Thank you for the article Michael.
I’d encourage you to add a paragraph, however, at least communicating that it IS possible to achieve all of the goals that frustrated you with SPA technologies. Most specifically, I’m talking about server-side-rendering. Not all SPA frameworks can, but the most popular ones do.
That said, I want to highlight that I don't disagree with your points. SPA FWs are often not the best tool for sites that need to produce structured markup in non-JS environments. But they don't need to be discarded when there are competing requirements.
Yep I have definitely experienced all of these issues, but with server-side rendering (and with the modern google crawler) these things mostly just go away.
Well 2 out of 4 anyway.
Caching and Technology Envy are not easily solved!
You can cache "ajax" requests and achieve the same results as with "page caching".
Surely the definition of a SPA is that the rendering is happening on the client side, rather than a new, pre-rendered page being requested from the server each time. So if you use server-side rendering is it still a SPA?
In an isomorphic framework, you can run one application on both the client and server side and it will still be an SPA. One such isomorphic framework is Aurelia.
The entry point is rendered server-side, but subsequent requests within the app are handled by the app once it's 'hydrated', so it's still an SPA. Just one that the crawler can request individual pages of.
Yes. You can hydrate the initial state of the page, and let the SPA take over from there. So the initial HTML won’t be a blank template, it’ll be all fleshed out. However, subsequent interactions will be loaded via AJAX instead of a full reload.
I agree with Gary that the article would benefit from a mention of server-side rendering; else you risk creating the same problem you’re trying to solve: developers reading short articles and making underinformed decisions
Very helpful article.
Thanks for sharing.
In the same boat actually.
I used to develop SPAs, and I've had the same headaches and the same problems. Server-side caching shouldn't be required in 2021. Google and other social media sites are meant to be able to crawl them with headless browsers, but I'm seeing this isn't the case. I think this is why Google is pushing people towards AMP for SPAs.
Suggesting headless browsers to crawl the web? Reconsider the scale of the problem.
Billions of websites, trillions of URLs and 1000+ petabytes of data.
I don't agree with your comment "Server side caching shouldn't be required in 2021". That is like saying a screwdriver is obsolete in 2021. It doesn't fit every use case, but there are many where it still makes plenty of sense.
Google pushing people toward AMP is also about control. There is plenty of controversy around it because AMP is in conflict with the open web.
If you're just starting out developing an SPA, there are far more powerful and easier-to-use tools. The first one I'd recommend is Meteor.js.
Meteor’s project scope is way too big for the small number of developers they have, and there’s too much “Embrace and Extend” mentality there that makes it very difficult to keep up with the JS community, and grow beyond what Meteor locks you in.
It took them, what, 3-4 years to update their Node.js version compatibility?
I hope things have gotten better since the 3 or so years ago when I had contact with it, but I would not recommend it to anyone looking to grow large and up-to-date.
As someone who has "been there," I had my own horror stories to bring and was expecting… something else. SEO and social sharing are gotchas but pretty easy to work around; you do admittedly need to deal with them specifically and in their own way. In a traditional server-side web application we tend to take these things for granted, and many of the implications of client-side development for the transitioning developer are overlooked, especially caching. But that's not to say you can't make it work. I guess that's what I was hoping to find here: a summary of the pitfalls and your creative solutions to them. You've articulated some issues but offered no solutions, which implies there are no solutions, which is a bit unfair on the technology. But like you said, you didn't know what questions to ask, so you didn't get the relevant advice.
I think you’ve glossed over some bigger issues like initial load times or how to forcibly clear the client’s cache, because now local (and content distribution) caching can get in the way of code updates getting to your clients.
My personal experience with SPAs is that ultimately they are good at some things, but they don't have to be everything. I find that out of the box, many of the SPA templates will lead you to write horribly inefficient code: our dev boxes have super-fast response times, but some clients may lag, or performance bloats over time because the initial code examples were never designed to support large amounts of data or frequent updates. That was probably the most frustrating aspect. If only the templates and code examples were designed for complex UIs with many nested templates or hundreds of rows in repeating lists, I might have started out with a much better structure and experienced greater success.
Finding a good balance between client and server side rendering is often a key to success in SPA.
Thanks for your insights Chris but you also offered no solutions to the issues raised. 🙂
Very important article, especially for junior developers like me who use frameworks when developing SPAs. It's worth sharing.
Thanks very much.
I got a lot out of your article and your experience. Also, I learned that I'm not so much different from the next developer. You work with what you know, learn as you go, change course as you go, hit roadblocks, dead ends, and sometimes completely U-turn. The one thing that stays constant is that you grow! The process of your whole experience can be applied to what all developers have gone through at one time or another, and I'm glad you said it! Very encouraging to hear you say, "…Sometimes it's easy to get envious of what others do, without realizing the skills you already have are valuable too." I have to keep reminding myself of that! And that it's okay not to know everything… to stop feeling like I have to play catch-up, when I'm really already 'there': I know what I am supposed to know, and I'm right where I am supposed to be! It's also hard to learn something new when you don't have a project to apply it to! I feel I should know everything, but that's impossible, and it would take the fun out of our field… the ever-changing, never boring world of code, design and development! Thanks for sharing your uplifting and inspiring experience!
I’ve seen a few sites that switched to single page applications and had very surprising results. The system was inserting new material in the middle as you went down the page. No matter how much you scrolled, you never reached the information at the bottom because it would always be pushed further down within a half second of appearing.
No code tools can be a big problem because you always have to have some way of telling the system what to do. If you have code, you can actually point to things to be fixed and I’ve had times when I’ve used grep to find bugs. It’s like Windows 8. They tried to simplify so that low level users could achieve more. The problem was that the low level users still couldn’t do the complicated stuff and the power users went crazy because everything was changed. The things that look like they should make things simpler actually make it impossible for people to understand and debug the problem.
And then you get the programmers who are unwilling to admit when they’re in over their head. I had situations where I was trying to figure out how to enable other programmers to steal the information because I knew where the problems were but they weren’t willing to listen to me. Managers get these “code free” systems, hire inexperienced programmers, and then don’t want to admit that they are having problems.
Thanks for the well written article.
One remark about SEO and server-side rendering: in my experience, Googlebot (using the Chromium rendering engine) is often able to crawl SPAs pretty well. I've had good results with websites built with KnockoutJS (in the past) and more recently with Angular. I know there is no guarantee that indexing of SPAs just works, but I think it is worth experimenting with before switching to server-side rendering. What worked well for me was a combination of an SPA and server-side rendering for just the meta tags (and other structured data needed for social sharing).
“Whenever I wanted to add a new page, for example, there were multiple areas of Angular code to touch: routers, controllers, services, templates, etc.”:
Absolutely agree, no one loves to write boilerplate code. I think it’s worth mentioning that most SPA frameworks offer a CLI for this, and if you are starting from scratch and prefer to work visually, tools like JitBlox can save you lots of time and frustration.
Just wanted to ask what did you end up using in the end?
You could possibly have integrated https://prerender.io/. While this feels like an annoyance, it runs an entire instance of Chrome that caches your SEO-perfect pages (Redis is an option for the cache), then looks for a crawler and serves them that version of the page, with all the HTML in place.
While this feels a bit rubbish, it's actually pretty much the only option, unless you are happy to use a Node server which can do SSR. But I want to have Python as my main webserver, so I just accept that I pretty much have to run this alongside a serious site that is built with a frontend framework. I wish there were a way for any backend technology to render SSR components (React or Vue), but it looks like you can't.
Would love to hear if anyone knows a more elegant way around this!
Actually, there is a PHP server-side framework (October, to be specific) where they have the concept of components.
Not to say that this is the only option out there, but that’s what I know at least.
I know this is going to sound like the classic snobby programmer telling you you’re wrong, but the elegant solution (sadly) is to not use Python.
I've developed apps with Python as the backend server, but always for internal tools or personal pages I don't share. If it's a site meant for the general public, especially if I want search engines to find it, I would probably use another technology that's more suited to the task.
As much as I enjoy working with Python/Pyramid, now that I’ve worked with node for a while, I’m not seeing a case where I go back to py as a backend, even for my personal stuff. It might just need to be a bandaid that’s gotta be pulled.
I have to admit, it’s satisfying to see widely-read blogs reaffirming the reservations I’ve always had about the SPA concept back long before terms like server-side rendering and isomorphic web apps got coined.
It just feels like, by the time you’ve finished discovering all the things you took for granted about the traditional approach, the value proposition for SPA isn’t so clear.
The one problem that was overlooked is that, for credibility's sake, a webpage and its content should always be knowable ahead of making a request. With an SPA that relies upon JS for loading content, the page can never be fully understood until after it has been delivered. This is an obvious problem area, as malicious content can be hidden with ease and not discovered until too late. Machines are virtually blind to your content as well, as you pointed out.
A better approach and one that I employ is to first serve up everything from the server. As necessary, make good practical use of JS to update any content as necessary. This avoids all the SEO problems altogether, ensures the content of the page is well understood for both search engines and real people, plus you can cache the server requests and the user only needs to load/update the content they choose.
If you still feel the back end of your system is not flexible enough to make life easy, then I would suggest your back end design needs reconsideration. I am using a unique hand crafted system which allows for me to go this route and still maintains structure and flexibility for any curves the road brings.
I just did a full rewrite of my site as an SPA. I'm not using much in the way of frameworks, just jQuery for the client and Bootstrap for the rendering. Perhaps it's the nature of my application that a major framework like Vue isn't really needed, but I prefer the freedom of not being tied to the more intrusive framework systems. For development shops, frameworks seem like a good choice because of the increased ability for code reuse.
“I did what any other developer who is already stretched thin and trying to balance a full time job and a family and hobby projects would do: I spent my nights, weekends, and every free moment I had working on it.”…
So true it hurts
SPAs have their place. They are very useful for enterprise applications and apps like Gmail (for instance). In my use cases, you are loading the index, logging into the application, and using functionality that isn’t public.
When you start adding things like server-side rendering, you might as well start with Django, Rails, or ASP.NET. They are proven tech that works. They aren't the latest and greatest, but they work perfectly for that type of use case.
I've made a career path similar to yours. I've been a developer for the past 20 years. Let me chip in.
SPAs have a reason. They are the logical step right now. Why? Even in the early days of IE5 and Netscape, there was this idea on the horizon that a full page postback was a mess in many, many ways and a blessing in a few. I remember writing my own async calls with JS or VBS (yeah, VBScript) to make quick "side" calls that would return plain text to be acted upon. So there was a need, even back then, as mainstream Active Server Pages frameworks were climbing to the top. I started with PHP and "classic ASP" before .NET.
Years later, user experiences were much richer, and the server had to spend loads of processing power crunching client-related code, sometimes forcing you to keep that in mind even in your business layer (you know, processing things in a way that would help how the response had to be built), all while making sure the postback was more or less seamless to the user (it never really was). Then came the jQuery days, then the "ajax" days; these were all preludes to the SPA era. And I tell you, I had a rough learning curve with Angular, at a time in my life when I thought I had dominion over these kinds of things. But then I crossed to the other side. I just love REST APIs and separation of concerns, and also having a robust framework for the client side. It's just too clear to me that this is the path, at least for now. SPAs have their low notes, just like any tech does. And that's fine. I would never go back to full page postbacks or even "partial" postbacks. My 2 cents.
I clicked through to read this, expecting some nuanced discussion of server-side rendering, technologies like Hotwire, etc… But no.
I appreciate that not everyone has experience working with SPAs, but this article verges on "popular thing considered harmful", as the comments demonstrate. The 'gotchas' involved are fairly basic – who actually expected Cloudflare to cache API calls? Doing that globally would be insane – only you, the developer, can specify which ones are safe to cache and which aren't. And I'm pretty sure you could force Cloudflare to cache the calls that are expensive if you wanted.
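For what it's worth, the per-endpoint control this commenter describes usually comes down to standard `Cache-Control` response headers, which Cloudflare and other CDNs respect. A hedged sketch of the idea (the endpoint paths and the 5-minute TTL are made up for illustration):

```javascript
// Choose a Cache-Control header per endpoint: expensive, public data
// can be cached at the CDN edge; anything user-specific must never be.
function cachePolicy(path) {
  if (path.startsWith("/api/public/")) {
    // Safe to cache at the edge for 5 minutes.
    return "public, max-age=300";
  }
  // User-specific or mutating endpoints: never cache.
  return "private, no-store";
}
```

The point is that caching API calls is an explicit opt-in decided route by route, not something a CDN guesses at.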
It is indeed important to reflect on mistakes made – but this article makes it sound like if you need any of those three things, you should avoid an SPA. I would have appreciated a discussion on *solutions* instead of giving the impression they’re intractable. Facebook is *itself* an SPA, and deals with all of these issues. Unless your app was *really* small, taking a step back and looking at how SPAs work, and fixing the issues involved, probably would have taken far less time than the big-bang rewrite did. And that would have made for a fine article!
As if adding SSR support to an SPA was not pretty much rewriting the whole app…
It's… not? There are ways to hydrate the full JS app server side and send it over, using the SPA code itself. Right now the de facto standard in JS documentation is to *call out* methods that *aren't* OK in SSR.
It's because they made a multi-page blog post of a back-end developer taking baby steps as a full-stack one. Imagine letting a front-end engineer write a several-page article about their first attempt at the back end or DevOps. It wouldn't be any more informative, and would probably reinforce misconceptions, as this article unfortunately does.
Most popular technologies have their uses. Most developers will run into trouble when using a stack outside of its strengths. Very experienced or talented engineers may be able to compensate and accomplish things that appear like wizardry to others, but that does not make for good generalized advice to a broad audience.
Most articles tend to be written by people with strong predilections who present their biases as the axiomatic solution, whether novices writing an after-action report or experts taking their vast experience for granted. Few articles are honest enough to admit that there are different horses for different courses.
I recently did the same: I began learning HTML and left Java alone, but I feel guilty. Java was starting to look easier and easier, but I felt like I was never going to build something real, since gaining the experience to do something useful with a programming language takes time and the right mentorship or company. The cool thing about the front end is that you see the results of your code really quickly. Great article!
There are lots of server-side pre-renderers that solve all three of your problems. Why aren’t you using one of those?
I’m developing a web application currently, and I was struggling to choose between single-page and traditional HTML. After thinking about an SPA, I kept thinking… “wait, it would have this problem, wouldn’t it?”. I’m so glad a professional article exists that confirms my suspicions.
You should take a look at Angular SSR; it specifically mentions search engine optimization and updating the meta tags among its pros.
I would check out Blazor. In my view, it is a whole lot nicer than Angular or any of the other “heavy” frameworks.
A very useful and insightful article. Glad you went through that experience and have come out a better developer on the other side.
I can relate to that and have become a better developer myself through similar experiences.
All the best for the future.
Of course you may choose to use SPAs for other reasons, e.g. because you want your mobile and web apps to share the same backend, or in cases where there's a *lot* of complex interactivity, but for most form-based websites it's way overkill.
You never said what you replaced Angular with. Spring templating engines like Thymeleaf?
It probably works great.
Curious. Did you solve for the metadata problem? What I am thinking of doing is fronting my application with a backend endpoint that statically renders the SPA for the first page load and subsequent navigation will use SPA routes. Does that seem like a good approach?
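That hybrid (the server renders the first hit, then the SPA takes over navigation) is a common pattern, and the piece that addresses the metadata problem is having the endpoint inject route-specific tags into the SPA shell before serving it. A rough sketch, with entirely made-up route data and `{{...}}` placeholders:

```javascript
// Inject per-route <title> and description into the SPA shell on the
// first load, so crawlers and link previews see correct metadata.
// Subsequent navigation happens client-side via SPA routes.
const routeMeta = {
  "/": { title: "Home", desc: "Welcome" },
  "/about": { title: "About", desc: "Who we are" },
};

function renderShell(template, path) {
  const meta = routeMeta[path] || routeMeta["/"]; // fall back to home
  return template
    .replace("{{title}}", meta.title)
    .replace("{{description}}", meta.desc);
}
```

Whether this is enough depends on how much of the page body crawlers also need; for full content indexing you'd still want pre-rendering or SSR.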
Interviewer: do you know React?
-I’ve used it a few times but I can build an excellent website with Java.
-we’ll be in touch
Remember SOAP? People overcomplicated things for years then eventually learned.
The actual quote goes “A jack of all trades is a master of none, but oftentimes better than a master of one.” It’s a compliment, not a derision. It understands the value of knowledge and skill overlaps. The point never was to make fun of people who don’t overspecialize, quite to the contrary. Specialization has its place, but a generalist is usually a better choice unless you specifically have very urgent need for a particular deep specialization (and can actually afford it).
It’s weird for the misquote to pop up on a software development post of all places – how many programmers do you know who _aren’t_ generalists? 😀
I am building an SPA. I was planning to create a static version of my 3.2 million pages, with an associated page map. Is that a bad idea?
I'd suggest using a server-side pre-renderer; that can make things much easier in these cases. @John yes, a really bad idea.
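On the "page map" part of the question: the sitemap protocol caps each sitemap file at 50,000 URLs, so 3.2 million pages would need a sitemap index pointing at dozens of chunked files. A small sketch of the arithmetic (the file naming is made up):

```javascript
// The sitemap protocol allows at most 50,000 URLs per sitemap file,
// so a large site needs a sitemap index referencing many chunked files.
const MAX_URLS_PER_SITEMAP = 50000;

function sitemapFiles(totalPages) {
  const count = Math.ceil(totalPages / MAX_URLS_PER_SITEMAP);
  return Array.from({ length: count }, (_, i) => `sitemap-${i + 1}.xml`);
}
```

For 3.2 million pages that works out to 64 sitemap files behind one index, regardless of whether the pages themselves are pre-rendered or served live.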
I got upset with Angular Material for my website.
From an SEO angle it is the most horrendous thing.
Now I am developing it with Hugo.
hmmm… many people are not aware that the quote "Jack of all trades, master of none" is incomplete.
The full quote is "A jack of all trades is a master of none, but oftentimes better than a master of one." It is supposed to be a compliment.
As a backend developer I kinda like SPAs, because it means I don't have to mess about coding templates, which were always a pain. I can just throw the data at an API endpoint and the front-end guys deal with it. Frees me up to deal with other things.
Very interesting article, thanks for sharing!
So when you have a few minutes, take a look at it… I will take this tech stack for all SPAs I need to deal with in the future… It's really worth a look…
What did you use in your redesign of the application? Will you share a link so we can see it?