Practical Ways to Write Better JavaScript
In our 2019 Dev Survey, we asked what kind of content Stack Overflow users would like to see beyond questions and answers. The most popular response was “tech articles written by other developers.” So from now on we’ll be regularly publishing articles from contributors. If you have an idea and would like to submit a pitch, you can email pitches@stackoverflow.com.
Hey there, I’m Ryland Goldstein, a product guy working on Reshuffle over at Binaris. This is my second piece for Stack Overflow. Let’s dig in!
I don’t see enough people talking about practical ways to improve at JavaScript. Here are some of the top methods I use to write better JS.
Use TypeScript
The number one thing you can do to improve your JS is to not write JS. For the uninitiated, TypeScript (TS) is a “compiled” superset of JS (anything that runs in JS runs in TS). TS adds a comprehensive optional typing system on top of the vanilla JS experience. For a long time, TS support across the ecosystem was inconsistent enough for me to feel uncomfortable recommending it. Thankfully, those days are long behind us, and most frameworks support TS out of the box. Now that we’re all on the same page about what TS is, let’s talk about why you would want to use it.
TypeScript enforces type safety
Type safety describes a process where a compiler verifies that all types are being used in a legal way throughout a piece of code. In other words, if you create a function foo that takes a number:
function foo(someNum: number): number {
return someNum + 5;
}
That foo function should only ever be called with a number:
good
console.log(foo(2)); // prints "7"
no good
console.log(foo("two")); // invalid TS code
Aside from the overhead of adding types to your code, there are zero downsides to type-safety enforcement. The benefit, on the other hand, is too large to ignore. Type safety provides an extra level of protection against common errors/bugs, which is a blessing for a lawless language like JS.
TypeScript types make refactoring larger applications possible
Refactoring a large JS application can be a true nightmare. Most of the pain of refactoring JS is due to the fact that it doesn’t enforce function signatures. This means a JS function can never really be misused, at least as far as the runtime is concerned. For example, if I have a function myAPI that is used by 1000 different services:
function myAPI(someNum, someString) {
if (someNum > 0) {
leakCredentials();
} else {
console.log(someString);
}
}
and I change the call signature a bit:
function myAPI(someString, someNum) {
if (someNum > 0) {
leakCredentials();
} else {
console.log(someString);
}
}
I have to be 100% certain that every place this function is used (thousands of places) is correctly updated. If I miss even one, my credentials could leak. Here’s the same scenario with TS:
before
function myAPITS(someNum: number, someString: string) { ... }
after
function myAPITS(someString: string, someNum: number) { ... }
As you can see, the myAPITS function went through the same change as the JavaScript counterpart. But instead of resulting in valid JavaScript, this code results in invalid TypeScript, as the thousands of places it’s used are now providing the wrong types. And because of the type safety we discussed earlier, those thousands of cases will block compilation and your credentials won’t get leaked (that’s always nice).
TypeScript makes team architecture communication easier
When TS is set up correctly, it will be difficult to write code without first defining your interfaces and classes. This provides a way to share concise, communicative architecture proposals. Before TS, other solutions to this problem existed, but none solved it natively without making you do extra work. For example, if I want to propose a new Request type for my backend, I can send the following to a teammate using TS:
interface BasicRequest {
body: Buffer;
headers: { [header: string]: string | string[] | undefined; };
secret: Shhh;
}
I had to write this code eventually anyway, but now I can share my incremental progress and get feedback without investing more time. I don’t know whether TS is inherently less bug-prone than JS. I do strongly believe that forcing developers to define interfaces and APIs first results in better code.
Overall, TS has evolved into a mature and more predictable alternative to vanilla JS. Developers definitely still need to be comfortable with vanilla JS, but most new projects I start these days are TS from the outset.
Use Modern Features
JavaScript is one of the most popular (if not the most popular) programming languages in the world. You might expect that a 20+ year old language used by hundreds of millions of people would be mostly figured out by now, but the opposite is actually true. In recent times, many changes and additions have been made to JS (yes I know, technically ECMAScript), fundamentally morphing the developer experience. As someone who only started writing JS in the last two years, I had the advantage of coming in without bias or expectations. This resulted in much more pragmatic choices about which features of the language to utilize and which to avoid.
async and await
For a long time, asynchronous, event-driven callbacks were an unavoidable part of JS development:
traditional callback
makeHttpRequest('google.com', function (err, result) {
if (err) {
console.log('Oh boy, an error');
} else {
console.log(result);
}
});
I’m not going to spend time explaining why the above is problematic (but I have before). To solve the issue with callbacks, a new concept, Promises, was added to JS. Promises allow you to write asynchronous logic while avoiding the nesting issues that previously plagued callback-based code.
Promises
makeHttpRequest('google.com').then(function (result) {
console.log(result);
}).catch(function (err) {
console.log('Oh boy, an error');
});
The biggest advantage of Promises over callbacks is readability and chainability.
While Promises are great, they still left something to be desired. For many, the Promise experience was still too reminiscent of callbacks. Specifically, developers were asking for an alternative to the Promise model. To remedy this, the ECMAScript committee decided to add a new method of utilizing Promises, async and await:
async and await
try {
const result = await makeHttpRequest('google.com');
console.log(result);
} catch (err) {
console.log('Oh boy, an error');
}
The one caveat being, anything you await must have been declared async:
required definition of makeHttpRequest in prev example
async function makeHttpRequest(url) {
// ...
}
It’s also possible to await a Promise directly, since an async function is really just a fancy Promise wrapper. This also means the async/await code and the Promise code are functionally equivalent. So feel free to use async/await without feeling guilty.
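For example, here’s a quick sketch (the delay helper is just for illustration, not from the earlier examples): because delay returns a plain Promise, it can be awaited directly without itself being declared async.

// a plain function that returns a Promise; no async keyword required
function delay(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

async function run() {
  await delay(1000); // awaiting the returned Promise directly
  console.log('one second later');
}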
let and const
For most of JS’s existence, there was only one variable declaration keyword: var. var has some pretty unique/interesting rules in regard to how it handles scope: it is function-scoped rather than block-scoped, and its declarations are hoisted to the top of the enclosing function. That behavior is inconsistent with most other languages, is easy to misread, and has resulted in unexpected behavior, and therefore bugs, throughout the lifetime of JS. But as of ES6, there are alternatives to var: const and let. There is practically zero need to use var anymore, so don’t. Any logic that uses var can always be converted to equivalent const and let based code.
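To make the difference concrete, here is a small sketch (the function and variable names are just for illustration): var is scoped to the enclosing function, while let and const are scoped to the enclosing block.

// var is function-scoped, so it leaks out of the block it was declared in
function withVar() {
  if (true) {
    var leaky = 'surprise';
  }
  console.log(leaky);
}
withVar(); // prints "surprise"

// let and const are block-scoped
function withLet() {
  if (true) {
    let contained = 'expected';
  }
  console.log(contained);
}
withLet(); // throws ReferenceError: contained is not defined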
As for when to use const vs let, I always start by declaring everything const. const is far more restrictive and “immutablish,” which usually results in better code. There aren’t a ton of “real scenarios” where using let is necessary; I would say I declare maybe 1 in 20 variables with let. The rest are all const.
I said const is “immutablish” because it does not work the same way as const in C/C++. What const means to the JavaScript runtime is that the reference held by that const variable will never change. This does not mean the contents stored at that reference will never change. For primitive types (number, boolean, etc.), const does translate to immutability, because the value itself can’t be modified and the binding can’t be reassigned. But for all objects (classes, arrays, dicts), const does not guarantee immutability.
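A quick sketch of what const does and does not prevent (the variable names are arbitrary):

const limit = 5;
// limit = 6;              // TypeError: Assignment to constant variable.

const config = { retries: 3 };
config.retries = 5;         // fine: the object's contents can change
console.log(config);        // prints { retries: 5 }
// config = { retries: 5 }; // TypeError: the reference itself cannot be reassigned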
Arrow => Functions
Arrow functions are a concise method of declaring anonymous functions in JS. Anonymous functions describe functions that aren’t explicitly named. Usually, anonymous functions are passed as a callback or event hook.
vanilla anonymous function
someMethod(1, function () { // has no name
console.log('called');
});
For the most part, there isn’t anything “wrong” with this style. Vanilla anonymous functions behave “interestingly” in regard to scope, though, which can result (and has resulted) in many unexpected bugs. We don’t have to worry about that anymore thanks to arrow functions. Here is the same code implemented with an arrow function:
anonymous arrow function
someMethod(1, () => { // has no name
console.log('called');
});
Aside from being far more concise, arrow functions have much more practical scoping behavior. Arrow functions inherit this from the scope they were defined in.
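As a rough illustration (the Timer class is hypothetical, not from the article’s examples), a callback passed to setInterval keeps the right this when it’s an arrow function:

class Timer {
  constructor() {
    this.seconds = 0;
  }

  start() {
    // the arrow function inherits `this` from start(), so this.seconds
    // refers to the Timer instance
    setInterval(() => {
      this.seconds += 1;
    }, 1000);
    // a vanilla anonymous function would get its own `this` here,
    // and this.seconds would not point at the Timer instance
  }
}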
In some cases, arrow functions can be even more concise:
const added = [0, 1, 2, 3, 4].map((item) => item + 1);
console.log(added) // prints "[1, 2, 3, 4, 5]"
Arrow functions that reside on a single line include an implicit return statement. There is no need for brackets or semicolons with single-line arrow functions.
I want to make it clear: this isn’t a var situation; there are still valid use cases for vanilla anonymous functions (specifically class methods). That being said, I’ve found that if you always default to an arrow function, you end up doing a lot less debugging than if you default to vanilla anonymous functions.
As usual, the Mozilla docs are the best resource.
Spread Operator ...
Extracting key/value pairs of one object and adding them as children of another object is a very common scenario. Historically, there have been a few ways to accomplish this, but all of those methods are pretty clunky:
const obj1 = { dog: 'woof' };
const obj2 = { cat: 'meow' };
const merged = Object.assign({}, obj1, obj2);
console.log(merged) // prints { dog: 'woof', cat: 'meow' }
This pattern is incredibly common, so the above approach quickly becomes tedious. Thanks to the spread operator, there’s never a need to use it again:
const obj1 = { dog: 'woof' };
const obj2 = { cat: 'meow' };
console.log({ ...obj1, ...obj2 }); // prints { dog: 'woof', cat: 'meow' }
The great part is, this also works seamlessly with arrays:
const arr1 = [1, 2];
const arr2 = [3, 4];
console.log([ ...arr1, ...arr2 ]); // prints [1, 2, 3, 4]
It’s probably not the most important recent JS feature, but it’s one of my favorites.
Template Literals (Template Strings)
Strings are one of the most common programming constructs. This is why it’s so embarrassing that natively declaring strings is still poorly supported in many languages. For a long time, JS was in the “crappy string” family. But the addition of template literals put JS in a category of its own. Template literals natively and conveniently solve the two biggest problems with writing strings: adding dynamic content and writing strings that bridge multiple lines:
const name = 'Ryland';
const helloString =
`Hello
${name}`;
I think the code speaks for itself. What an amazing implementation.
Object Destructuring
Object destructuring is a way to extract values from a data collection (object, array, etc.), without having to iterate over the data or access its keys explicitly:
old way
function animalParty(dogSound, catSound) {}
const myDict = {
dog: 'woof',
cat: 'meow',
};
animalParty(myDict.dog, myDict.cat);
destructuring
function animalParty(dogSound, catSound) {}
const myDict = {
dog: 'woof',
cat: 'meow',
};
const { dog, cat } = myDict;
animalParty(dog, cat);
But wait, there’s more. You can also define destructuring in the signature of a function:
destructuring 2
function animalParty({ dog, cat }) {}
const myDict = {
dog: 'woof',
cat: 'meow',
};
animalParty(myDict);
It also works with arrays:
destructuring 3
const [a, b] = [10, 20];
console.log(a); // prints 10
There are a ton of other modern features you should be utilizing beyond the handful covered here.
Always Assume Your System is Distributed
When writing parallelized applications your goal is to optimize the amount of work you’re doing at one time. If you have four available cores and your code can only utilize a single core, 75% of your potential is being wasted. This means blocking, synchronous operations are the ultimate enemy of parallel computing. But considering that JS is a single threaded language, things don’t run on multiple cores. So what’s the point?
JS is single threaded, but not single-file (as in lines at school). Even though it isn’t parallel, it’s still concurrent. Sending an HTTP request may take seconds or even minutes, so if JS stopped executing code until a response came back from the request, the language would be unusable.
JavaScript solves this with an event loop. The event loop loops through registered events and executes them based on internal scheduling/prioritization logic. This is what enables sending thousands of simultaneous HTTP requests or reading multiple files from disk at the same time. Here’s the catch: JavaScript can only take advantage of this capability if you use the right constructs. The simplest example is the for-loop:
let sum = 0;
const myArray = [1, 2, 3, 4, 5, ... 99, 100];
for (let i = 0; i < myArray.length; i += 1) {
sum += myArray[i];
}
A vanilla for-loop is one of the least parallel constructs that exists in programming. At my last job, I led a team that spent months attempting to convert traditional R-lang for-loops into automagically parallel code. It’s basically an impossible problem, only solvable by waiting for deep learning to improve. The difficulty of parallelizing a for-loop stems from a few problematic patterns. Sequential for-loops, where each iteration depends on the previous one, are very rare, but their mere possibility makes it impossible to guarantee a for-loop’s decomposability:
let runningTotal = 0;
for (let i = 0; i < myArray.length; i += 1) {
if (i === 50 && runningTotal > 50) {
runningTotal = 0;
}
runningTotal += Math.random() + runningTotal;
}
This code only produces the intended result if it is executed in order, iteration by iteration. If you tried to execute multiple iterations at once, the processor might incorrectly branch based on inaccurate values, which invalidates the result. We would be having a different conversation if this was C code, as the usage is different and there are quite a few tricks the compiler can do with loops. In JavaScript, traditional for-loops should only be used if absolutely necessary. Otherwise, utilize the following constructs:
map
// in decreasing relevancy :0
const urls = ['google.com', 'yahoo.com', 'aol.com', 'netscape.com'];
const resultingPromises = urls.map((url) => makeHttpRequest(url));
const results = await Promise.all(resultingPromises);
map with index
// in decreasing relevancy :0
const urls = ['google.com', 'yahoo.com', 'aol.com', 'netscape.com'];
const resultingPromises = urls.map((url, index) => makeHttpRequest(url, index));
const results = await Promise.all(resultingPromises);
for-each
const urls = ['google.com', 'yahoo.com', 'aol.com', 'netscape.com'];
// note this is non blocking
urls.forEach(async (url) => {
try {
await makeHttpRequest(url);
} catch (err) {
console.log(`${err} bad practice`);
}
});
I’ll explain why these are an improvement over traditional for-loops. Instead of executing each iteration in order (sequentially), constructs such as map take all of the elements and submit them as individual events to the user-defined map function. For the most part, individual iterations have no inherent connection or dependence on each other, which allows them to run concurrently. This isn’t to say that you couldn’t accomplish the same thing with for-loops. In fact, it would look something like this:
const items = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9];
async function testCall(item) {
// do async stuff here
}
for (let i = 0; i < items.length; i += 1) {
testCall(items[i]);
}
As you can see, the for-loop doesn’t prevent me from doing it the right way, but it sure doesn’t make it any easier either. Compare to the map version:
const items = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9];
items.map(async (item) => {
// do async stuff here
});
As you can see, the map just works. The advantage of the map becomes even more clear if you want to block until all of the individual async operations are done. With the for-loop code, you would need to collect the resulting Promises in an array yourself. Here’s the map version:
const items = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9];
const allResults = await Promise.all(items.map(async (item) => {
// do async stuff here
}));
it's really that easy
There are many cases where a for-loop would be just as performant (or maybe more so) than a map or forEach. I would still argue that losing a few cycles now is worth the advantage of using a well-defined API. That way, any future improvements to that data access pattern’s implementation will benefit your code. The for-loop is too generic to have meaningful optimizations for that same pattern.
There are other valid async options outside of map and forEach, such as for-await-of.
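For the curious, here is a minimal sketch of for-await-of, reusing the makeHttpRequest function from the earlier examples; it’s handy when results need to be handled strictly in order:

const urls = ['google.com', 'yahoo.com', 'aol.com', 'netscape.com'];

async function fetchInOrder() {
  // map kicks the requests off concurrently; the loop then awaits
  // each result in the original order
  for await (const result of urls.map((url) => makeHttpRequest(url))) {
    console.log(result);
  }
}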
Lint Your Code and Enforce a Style
Code without a consistent style (look and feel) is incredibly difficult to read and understand. Therefore, a critical aspect of writing high-end code in any language is having a consistent and sensible style. Due to the breadth of the JS ecosystem, there are a LOT of options for linters and style specifics. What I can’t stress enough is that using a linter and enforcing a style (any of them) is far more important than which linter or style you specifically choose. At the end of the day, no one is going to write code exactly how I would, so optimizing for that is an unrealistic goal.
I see a lot of people ask whether they should use ESLint or Prettier. For me, they serve very different purposes and therefore should be used in conjunction. ESLint is, most of the time, a traditional linter. It’s going to identify issues with your code that have less to do with style and more to do with correctness. For example, I use ESLint with the Airbnb rules. With that configuration, the following code would force the linter to fail:
var fooVar = 3; // the Airbnb rules forbid "var"
It should be pretty obvious how ESLint adds value to your development cycle. In essence, it makes sure you follow the rules about what is and isn’t good practice. Due to this, linters are inherently opinionated. As with all opinions, take them with a grain of salt. The linter can be wrong.
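For reference, a minimal sketch of what that setup can look like (assuming ESLint and the eslint-config-airbnb-base package are installed; the overrides section is just a placeholder):

// .eslintrc.js
module.exports = {
  extends: 'airbnb-base',
  env: {
    node: true,
  },
  rules: {
    // project-specific overrides go here when a rule doesn't fit
  },
};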
Prettier is a code formatter. It is less concerned with correctness and far more worried about uniformity and consistency. Prettier isn’t going to complain about using var, but it will automatically align all the brackets in your code. In my personal development process, I always run Prettier as the last step before pushing code to Git. In many cases, it even makes sense to have Prettier run automatically on each commit to a repo. This ensures that all code coming into source control has a consistent style and structure.
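One way to wire that up, as a sketch (the options shown are standard Prettier settings, but the specific choices are just examples):

// .prettierrc.js
module.exports = {
  singleQuote: true,
  trailingComma: 'es5',
  printWidth: 100,
};

// then, before pushing (or from a pre-commit hook):
//   npx prettier --write .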
Test Your Code
Writing tests is an indirect but incredibly effective method of improving the JS code you write. I recommend becoming comfortable with a wide array of testing tools. Your testing needs will vary, and there’s no single tool that can handle everything. There are tons of well established testing tools in the JS ecosystem, so choosing tools mostly comes down to personal taste. As always, think for yourself.
Test Driver – Ava
AvaJS on Github
Test drivers are simply frameworks that give structure and utilities at a very high level. They are often used in conjunction with other specific testing tools, which vary based on your testing needs.
Ava strikes the right balance of expressiveness and conciseness. Ava’s parallel and isolated architecture is the source of most of my love for it. Tests that run faster save developers time and companies money. Ava boasts a ton of nice features, such as built-in assertions, while managing to stay very minimal.
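As a rough sketch of what an Ava test looks like (the add function is a stand-in for whatever you’re actually testing):

// sum.test.js
const test = require('ava');

// hypothetical function under test
const add = (a, b) => a + b;

test('add sums two numbers', (t) => {
  t.is(add(2, 3), 5);
});

test('async code reads the same way', async (t) => {
  const result = await Promise.resolve('done');
  t.is(result, 'done');
});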
Alternatives: Jest, Mocha, Jasmine
Spies and Stubs – Sinon
Spies give us function analytics, such as how many times a function was called, what arguments it was called with, and other insightful data.
Sinon is a library that does a lot of things, but only a few of them super well. Specifically, Sinon excels when it comes to spies and stubs. The feature set is rich but the syntax is concise. This is especially important for stubs, considering they partially exist to save space.
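A small sketch of both ideas (the mailer object is hypothetical):

const sinon = require('sinon');

// hypothetical object under test
const mailer = {
  send(to) { /* talks to the outside world */ },
};

// a spy records how the real function was used
const sendSpy = sinon.spy(mailer, 'send');
mailer.send('a@example.com');
console.log(sendSpy.callCount);                   // 1
console.log(sendSpy.calledWith('a@example.com')); // true
sendSpy.restore();

// a stub replaces the real behavior with canned behavior
sinon.stub(mailer, 'send').returns('queued');
console.log(mailer.send('b@example.com'));        // "queued"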
Alternatives: testdouble
Mocks – Nock
Nock on Github
HTTP mocking is the process of faking some part of the HTTP request process so the tester can inject custom logic to simulate server behavior.
HTTP mocking can be a real pain, but Nock makes it less painful. Nock directly overrides Node.js’s built-in request functionality and intercepts outgoing HTTP requests. This in turn gives you complete control of the response.
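A minimal sketch of intercepting a request (the host and path are made up):

const nock = require('nock');
const https = require('https');

// any GET to https://api.example.com/users/1 now gets this canned reply
nock('https://api.example.com')
  .get('/users/1')
  .reply(200, { id: 1, name: 'Ryland' });

https.get('https://api.example.com/users/1', (res) => {
  let body = '';
  res.on('data', (chunk) => { body += chunk; });
  res.on('end', () => console.log(body)); // prints {"id":1,"name":"Ryland"}
});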
Alternatives: I don’t really know of any 🙁
Web Automation – Selenium
Selenium on Github
I have mixed emotions about recommending Selenium. As it’s the most popular option for web automation, it has a massive community and online resource set. Unfortunately, the learning curve is pretty steep, and it depends on a lot of external libraries for real use. That being said, it’s the only real free option, so unless you’re doing some enterprise grade web-automation, Selenium will do the job.
Alternatives: Cypress, PhantomJS
The Never Ending Journey
As with most things, writing better JavaScript is a continuous process. Code can always be cleaner, new features are added all the time, and there’s never enough tests. It may seem overwhelming, but because there are so many potential aspects to improve, you can really progress at your own pace. Take things one step at a time, and before you know it, you’ll be a JavaScript ace.
This blog post originally appeared on Ryland’s personal website and on Dev.to. You can find more of his writing on both sites. If you would like to contribute articles to the Stack Overflow blog, send an email to pitches@stackoverflow.com.
40 Comments
@rgoldstein What does SO pay for articles? After all, the stackexchange network isn’t a non-profit and you run ads right in the content, so you’re compensating the authors I assume.
We do pay contributors. Send us a pitch and if we think it’s worth publishing, we can discuss a fair fee 🙂
I’m still working through all the debt I incurred from using StackOverflow answers my entire career. 🙂
Jokes aside, contributors are paid and my experience with StackOverflow (and Ben) has been a joy!
Thank you for your JavaScript techniques …
No mention of Puppeteer for web automation?
Puppeteer could have definitely been in there. I honestly only used Puppeteer casually, which is why I didn’t include it in the article.
I also started working for Testim.io – expect to hear more from us in this space very soon :]
We already do some pretty cool stuff there though ATM mostly for big companies.
While the general articles are good, some of the supporting context and examples are somewhat contrived and no balance is offered.
E.g. “if I [want to refactor] a function myAPI that is used by 1000 different services”
That’s a common scenario that already has sound methodologies to implement, you wouldn’t go about it by just changing the function then searching for wherever it’s used. Also, important code should be robust and have its own internal checks – simple “type safety” is but one aspect of good code. ECMAScript’s loose typing can also be very convenient.
Similarly, the statements:
“var has some pretty unique/interesting rules in regards to how it handles scope. The scoping behavior of var is inconsistent and confusing and has resulted in unexpected behavior and therefore bugs throughout the lifetime of JS.”
are made with no proof or examples and are arguably wrong. The scoping of variable declared with var is very simple, and let and const also have possibly unexpected behaviours (e.g. temporal dead zone).
The advice to use const for everything is also arguably not good advice. Using it for everything and only using let for things you know you’ll change the value of (presumably only primitives) destroys the semantics of const, which some prefer to save for real constants like factors (e.g. pi or feet/metres conversion factor). Using const for objects is antithetical to “const”.
Overall they are good articles, you just need to be much more careful with sweeping generalisations that really can’t be supported and should also provide more balance when presenting risks and issues.
PS. There is no “spread operator”. 😉
I feel similar.
TS isn’t a general solution to writing better JavaScript code. The TS source I’ve worked with has its own set of gotchas and is often poorly implemented (think “any” everywhere). Using the automatic type conversion to your advantage requires care in implementation, but can also be very useful.
Most of the modern features are pretty great. Template literals, functional functions (map, filter, etc.), spread, maps, generators, iterators, observables, and destructuring are all great additions. Some aren’t. The const vs. let vs. var debate is not the big deal many make it out to be. Arrow functions are less readable, and the different support for the execution context (this) makes the issue more confusing, not less. The class syntactic sugar constructs try to make JavaScript feel like something it isn’t: the OO inheritance model vs. the prototypical one. The only advantage to that is to appease the criticisms from typical server developers who never put in the time to learn JavaScript fully. The “it isn’t JAVA-like so it sucks” crowd led to things like GWT; sorry, I’ll never forgive them :)
Unit testing a UI, beyond the model portion of an MVC implementation, is largely a waste of time. Many eyeballs will be looking at the state of the UI, including the dev, the analyst, and the tester. Testing for CPU and memory leaks/performance is much more worthwhile than writing a unit test to verify that the function to change your background actually did so in a headless browser. Using workflow automation to identify regressions, though, has a very high ROI.
Props for mentioning linting and enforcing a style. Simple to do with great benefit.
Devs looking to truly up their JS game should look more into the less used and understood core language features that can improve JS code significantly, like higher-order functions, currying, partial functions, memoization, modules, pub/sub, etc.
Of course, all just my opinion.
I agree with you about the advice for using const everywhere.
My previous project used `let` everywhere and `const` only for constant values (values set literally and used everywhere). But now, my current project uses `const` everywhere. I have been doing it for 2 months and already type `const` automatically when creating a variable. But I haven’t felt the benefit of it, and I need to type more. Reading `const` makes me feel there is an inconsistency between ordinary variable names and real constants. I don’t have a mental model that `let` tells me the variable is going to change. So, it just means I need to hold a value to be used later and `let` the `variableName` hold it. That is how I read it.
When I keep my functions small, most of the re-assignments I find are values that depend on certain other values, which can be wrapped in a function that simply takes the values it depends on as parameters, and that also leads to cleaner code. This article https://zellwk.com/blog/dont-reassign/ describes what I mean, but with `let`. For me, since everything isn’t meant to change, `const` doesn’t help but makes me type more and get mixed up with the real `const`.
The only benefit of using `const` I found is to force me to move the re-assignment to another function, but it doesn’t justify the cons I face.
I’d like to blog for you. I’d prefer to send my ideas to a non-public email though. Is there an address I can email to?
Hi Ben!
You can hit us up at pitches@stackoverflow.com, as mentioned at the end of the article. The email goes directly to us editors here, and now we know you’re coming :).
Can I use an async-await function instead of callback hell?
> Use Typescript
Depends on the use case and the team you’re working with.
From my experience (and the experience of assorted developers on LinkedIn I queried), it’s good for teams that are working on heavily OOP-style projects (like an Angular app) and/or are not as privy to JS type craziness. For more veteran JS developers, it just adds overhead and can actually make project migration more difficult. Not to say that they’re against type-checking, just not with something as overarching.
I also found that people either don’t really care about it or care to use it, or they REALLY LIKE IT.
Where are Scss/Sass, Less, Vue.js, Karma, and Backbone.js? I agree with most of it anyway, thumbs up.
Using TypeScript, a tool, a framework, and such is not a better way to write JavaScript. This is not about JavaScript “as-is” and how to do it right with “as-is”. The title is misleading and the article coloured.
I also think TS is good, but it has a higher learning curve and the syntax is not beginner friendly. TS updates frequently and it is a pain to update your project again and again.
I am happy without it, and using JS constructors can help prevent bugs.
Am I the only programmer on earth who programs by myself without a team?
I have been programming with web languages for about 10 years. The most common pattern I see among these types of articles and tutorials is instructions and advice aimed at programmers who work on teams. That includes Stack Overflow itself.
That being said, I want to talk about the new features in the new version of ECMA Script…
const and let:
The new keywords const and let have been getting a lot of reviews lately. A lot of programmers I read from hail them as saviors. After much debate with these programmers, they have made it very clear to ME, that these new keywords are specifically designed to aid those that share their scripts with fellow scripters. They cried, ECMA delivered. For me personally, I failed (with an open mind and much consideration) to find a use case in my day to day tasks. I do not share my scripts, so ALL of my code is managed by me solely. With a little planning ahead, I VIRTUALLY never run into the problems/bugs I see this article expressing. Keep in mind, I hand write in Notepad++ and do not use any additional software or plug-ins. Not a single one, as long as you don’t count my virtual offline server.
Now to talk about var… My opinion is that, if you ALWAYS use var and you ALWAYS treat every declaration as a var, then your fellow teammates will not have to worry about whether or not it is a const or let variable. Having to weed through armies of scripts to determine that is time consuming. Just assume a var and you are always safe. TIME SAVED. Also, when you consider keyword searching features in text editors, you save a few clicks there too.
Before I move on to the next feature, let me make a point which I will make again later, and which is the focus of my reply… To me, these new features reek of the phrase “I’m too lazy to code properly so let’s have ECMA find solutions for us”.
Creating new features this way runs us into a “catch 22”, in the long run, for two reasons:
1. Backwards compatibility.
2. Training experienced programmers
To me, re-learning the most fundamental aspects of a language is both redundant and defeats the purpose of our ultimate goal: saving time.
With every attempt to save time, we in turn, spend time. I just read this article and wrote this reply. I really should be working on my current project and yet here I am, spending time. ECMA spent countless hours solving everyone’s lazy syntax practices, rather than addressing the real issue; bad programming practices.
A side on helper software:
Whenever I have to learn a new program, I cringe, because there goes a huge portion of my time. I could probably spend the rest of my life learning how to use software that helps me write software! There are countless programs out there! That is a complete waste of my precious life! I would rather be spending my life doing the thing I love so much, writing Javascript. I got into writing code because for me, it was free. It did not cost me anything but my time. I was raised very poor and can not financially keep up with all the software releases that come out every minute.
Back to features…
Anonymous functions:
Anonymous functions may be difficult for beginners to understand. I was a beginner once, so I understand the desire to do something about it. That said, the “cleaner syntax” arrow functions provide really is not all that much cleaner and is harder for a beginner to understand. The keyword function makes a hell of a lot more sense to me (a human) than does a couple of brackets.
This was one of the fundamental reasons why I was so afraid to learn JS in the first place. All those symbols looked like Greek to me. Now show me some keywords and I am interested. When you also consider that software like Notepad++ color codes keywords for you, you end up starting to like them a hell of a lot more than symbols. Interestingly, I spent my first couple of years programming using nothing but Windows Notepad and a browser. Sometimes simple is better. No hassle, no complexity, and it saves me thousands of clicks on the mouse. Remember our goal?
I find it completely frustrating to have to learn any software that does my job for me. Let me write the code. If I want help, it will be with aesthetics, not functionality. Hence why I switched to Notepad++ from Windows Notepad. Most of Notepad++’s features I rarely use.
The point here with my ramble, is that these new features reek of “help me solve my buggy code”. When it should be “help me write more readable code”. For those in the team department, this is paramount. Writing better code saves EVERYONE time, including the end-user.
Let me finish with discussing a little about my experiences on stackoverflow. Since I am a one-man show, most of my approaches to my script might seem unorthodox, maybe they are, that is subjective though. However, they are my designs, and I sometimes need help, rarely, but they do arise here and there. That said, I often run into programmers who do not have a solution for me, or are too confused about my design, and I end up devising my own solution.
So to sum, again, creating features in ECMA Script to solve bad programming is a trendy thing to do. I say trendy because there will come a time when these features are obsolete, thus creating time wasted; the most valuable asset in the universe.
P.S. To give constructive criticism, your article lacks citations. I would be interested to know where you got a lot of your statements about what programmers do and don’t do come from. I can assure you that if I exist, with my unusual approaches to programming, then there are others as well.
I want to add a neat little trick that I learned on using anonymous functions that helps to write cleaner, more readable code, but also helps beginners to understand the concept. It certainly helped me understand it better.
The concept is simple: instead of declaring your anonymous function’s syntax in your argument list, you can simply store the anonymous function in a variable. This is something I was not aware I could do for many years. Let me show you an example:
var anonFunc = function () { /* do stuff here */ };
Then you can just simply include the anonymous function, using a variable name, in your argument list. Super clean and readable!
element.addEventListener('click', anonFunc);
In my opinion, this approach helps you write clean, human-readable code. The bonus is that beginners can see how anonymous functions are not only important in certain use cases, but also demonstrates the difference between anonymous functions and regular ones.
As an added bonus, you can not only store anonymous functions in variables, objects, and arrays, but you can also make them global.
When I read “that’s why I switched to Notepad”….
I may be wrong about you, but I think you only write simple programs and time doesn’t matter to you.
I program with a big team at work, but I also program at home alone.
Writing plain JS in Notepad vs TS + VSCode, WebStorm… is like night and day: you are much more efficient, your programs scale better, refactoring is not a nightmare, the console detects errors for you, and if you want you can help the IDE with types and it will help you be more efficient.
One project with 15-30 files is easy to maintain even with paper.
I am also using VSCode and it is awesome. For a JavaScript developer, VSCode is a must, and Microsoft has done a very good job by open sourcing it.
Well, Jeff, you are not alone. I have also been programming individually for the last couple of years. It’s a really good feeling, and one thing is for sure: VS Code helps a lot no matter what programming language you are working in.
Rule #1: dont
/s
Thanks for the read. PhantomJS has been dead and deprecated for quite a while now – please consider editing it out.
Huh, didn’t know about the concurrency issues with traditional for loops; that would explain a few things….
Great read, thank you for that! As for Web Automation I’d highly recommend WebdriverIO – free, well developed and a bit similar to Selenium (i.e. uses WebDriver protocol).
Web automation has Puppeteer, which is native Node.js and has its own special version of Chromium.
You can also run Chrome, Firefox and Edge with Puppeteer.
Thank you for a great article.
I enjoyed reading it, and I think it is a good article for those who want to pick up on some great hints and tips.
My favorite part of this article was the following statement about arrow functions:
“Aside from being far more concise, arrow functions have much more practical scoping behavior. Arrow functions inherit this from the scope they were defined in.”
As mentioned in the comments, some more in-depth explanation of certain statements, such as “var has some pretty unique/interesting rules,” along with some examples, would be appreciated. If not explained in this article, maybe you could link to another article explaining it in depth.
Again, thanks for sharing.
Thanks for the article! There is a misleading statement regarding the async-await functionality:
‘The one caveat being, anything you await must have been declared async:’
The statement is false. The function whose body contains the ‘await’ keyword must have the ‘async’ keyword in its declaration. The function that is awaited does not have to have the ‘async’ keyword, unless its body contains ‘await’ statements.
Agreed. The basic example of this is –
async function myFetch() {
let response = await fetch('coffee.jpg');
if (!response.ok) {
throw new Error(`HTTP error! status: ${response.status}`);
}
return await response.blob();
}
Here, you cannot use await, if the function wasn’t async.
Some good ideas here, but from the perspective of lots (lots) of experience in the JS world, I see a significant portion of the advice to be along the lines of “use the latest stuff.”
High quality applications have been, can be, and will continue to be written in plain old JS. There are more than enough features in the language to write strong, well thought out code. Classes, async functions, promises, arrow functions, etc., not to mention what’s on the board for the next versions (annotations etc). Throw RxJS into the mix and you’re really off and running.
Again, I’ve been using JS for a long, long time. When the wheel turns, and it always does turn, JS has always been the last man standing, every single time. So many other “save us from JS!!!” efforts have come and gone (as well as the religious choruses that supported them), I can’t even remember them all anymore.
My advice: sure, know TS, and use it. It’s a marketable industry standard. But all that aside, KNOW THY JS. Don’t shove it aside. Watch the standards, make sure you know what’s coming, and know how to use it. That makes your package complete.
“Use Typescript” could also mean use Coffeescript, Haxe, Dart or whatever other compile-to-javascript languages are out there. I use Haxe and love it.
Hello everyone, I need help. My problem is that I have an array, and from this array I have to create a graphical component. For example, I have an array [enter image description here] and I have to build a graphical component that is an axis with circles on the left and right. How do I do that in JavaScript? Thanks.
If I have a void method, should I add void for further clarification that it doesn’t return a type, or is it enough to have the method without the return type?
Would you advise never using var in JS? Kyle Simpson (YDKJS) argues that let and const should be complementary tools but not in any way a replacement for var.
Hello,
I’ve never used the *Nock* library, but I tried Miragejs (https://miragejs.com/). It seems worthy.
Thanks for the article 👍🏻
I come from a Python background and would try to write a lot of for loops in JavaScript, and have only recently become comfortable with using forEach and map methods. Thank you for the section on traditional loops vs these array methods!
“This code only produces the intended result if it is executed in order, iteration by iteration. If you tried to execute multiple iterations at once, the processor might incorrectly branch based on inaccurate values, which invalidates the result.”
Which would never happen because we’re writing JS, which is single threaded. We’d always get the intended result (for your example).