It’s Always Been Broken

Sometimes, when debugging an error that has just popped up in a program, the problem lies in some external parameter that has recently changed. Most of the time, however, the change only serves to reveal a dormant bug that was in the code all along. The code can work under one set of parameters but not another. Either way, it has always been broken.

It’s The Little Things That Get You

I spent some time today dealing with a layer of data that was added to an ESRI REST service. The service layers are zero-indexed, which means you can reference them by their position relative to each other. The first layer sits at position zero, the second at 1, the third at 2 and so on.

The new layer was inserted just before the fourth existing layer. This means the new layer became the fourth, the old fourth became the fifth and the old fifth became the sixth. Are you following along?

Anyway, I was working on a web app that consumes this service and references the service layers by those index positions. I added some new markup and code for the added layer and changed the existing code to reference the two new position numbers. At first everything seemed to be working fine. Then a coworker noticed that the new layer’s data was showing on screen by default (you’re supposed to check a box first). It took me a while to realize that the new layer was being controlled by a different checkbox: the one that controls the layer right after it in the service.

To make an already long and confusing story shorter: there was a second reference to the service indexes in my code. When I designed the app a few years ago I, for some reason, found it expedient to make two hard-coded references to the index. Instead of taking the time to create a variable, I had repeated a hard-coded number that could be changed by an outside source.

My poor design choice came back to bite me. Not only did I repeat code, I made it confusing to update. It’s embarrassing to admit that I wrote code that even I couldn’t figure out later. But it goes to show that taking the time to design things correctly from the start pays off in the future. It’s the little things that get you: that second hard-coded index number that’s nearly impossible to notice.
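
If I were writing that code today, I’d pull the index into a single named constant so a service change means a one-line fix. Here’s a minimal sketch of the idea; the service URL and layer names are hypothetical:

// Hypothetical ESRI REST service URL
const SERVICE_URL = 'https://example.com/arcgis/rest/services/MyMap/MapServer';

// One authoritative home for the zero-based layer indexes.
// If the service adds or reorders layers, only this object changes.
const LAYER_INDEX = {
  newLayer: 3,
  roads: 4,
  hydrology: 5
};

// Every other reference goes through the constant, never a repeated magic number:
const roadsLayerUrl = `${SERVICE_URL}/${LAYER_INDEX.roads}`;
console.log(roadsLayerUrl); // ends in /MapServer/4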

Future Proofing a Technology Career

Is it possible to “future-proof” a career in technology? An article yesterday by IDG Connect proposes that to future-proof your technology career you should learn “cyber security, business intelligence, data science/big data, DevOps, JavaScript and UX/UI development and design”.

Their suggestions are based on the fact that these domains are in high demand right now. But the must-have technology of today may be the forgotten technology of tomorrow. Basing your career on a single language or specialty almost ensures your skill set will become obsolete.

Specializing in a given tech field or becoming proficient in a particular programming language isn’t bad. But you shouldn’t base your entire career on it. A better approach is to gain broad knowledge of computer technology or programming and supplement that with specific expertise. That way, when the language of the day changes or the cyber security field is saturated with employees, you’ll be able to shift to the new tech need more easily and with more authority.

But there might be an even better way to avoid technology skills obscurity. In his book You Can Do Anything, author George Anders suggests that the key to securing a long-lasting place in the work world is to develop skills that might be completely unrelated to what you think you should learn. The book’s premise is that a liberal arts degree could be the key to securing high-paying jobs in a number of career fields, including technology.

Anyone can learn a programming language. Anders says it’s “nothing that can’t be picked up in a few months of concentrated effort”. But it takes a different skill set to think creatively and apply technology training to the problems companies face. Liberal arts degrees can give you those different skill sets.

They can help you develop creative thinking, critical thinking, communication and analytical skills, among other things. My own political science degree solidified my analytical abilities and taught me how to look beyond seemingly obvious answers to problems and find solutions with more permanent outcomes. And it was my interest in political theory and intelligence that led me to a career in geographic information systems.

Having a computer science degree doesn’t doom you. Science degrees still produce some of the primary skills tech employers are looking for. But if you want to guarantee your place in today’s ever-changing world of technology, hone your soft skills. Carve your own niche by looking outside of technology for the skills businesses are looking for in their best employees.

Objective Reasoning – The Basic JavaScript Object

What is an Object?

In JavaScript, an object is simply a container or store of properties that are related to what the object is modeling. These properties can be primitive data types, other objects or functions. Here’s an example of an empty object with no properties:

const furBearingTrout = {};

I’ve used object literal notation, which is doing just what it sounds like – literally writing out the notation of an object. We do this using curly braces {}.

How to create properties

When we want to set a property on our object we can do it one of two ways:
1. Use bracket notation, where we state the object name along with a property name, as a string, in brackets and then assign a value using the equal sign:

furBearingTrout["name"] = 'Alpino-Pelted';
furBearingTrout["url"] = 'http://www.furbearingtrout.com/fish2.html';

2. Use dot notation, where we state the object name followed by a dot, followed by the property name we want to use, and then assign a value using the equal sign:

furBearingTrout.name = 'Alpino-Pelted';
furBearingTrout.url = 'http://www.furbearingtrout.com/fish2.html';
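
Bracket notation looks clumsier, but it’s the only option when the property name lives in a variable or isn’t a valid identifier. A quick illustration (the 'native habitat' property is made up for the example):

const prop = 'url'; // property name chosen at runtime
console.log(furBearingTrout[prop]); // dot notation can't do this

furBearingTrout['native habitat'] = 'Iceberg Lake'; // names with spaces require brackets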

I could also have created the object with its properties already assigned. Note that properties are written as name: value pairs separated by commas.

const furBearingTrout = {
  name: 'Alpino-Pelted',
  url: 'http://www.furbearingtrout.com/fish2.html'
};

How to access properties

Once we have properties, we can access them using the same two methods we used to set them, just without the equal signs that assign values.

console.log(furBearingTrout.name); // outputs 'Alpino-Pelted'
console.log(furBearingTrout['url']); // outputs 'http://www.furbearingtrout.com/fish2.html'

We can also create objects using the constructor method (new Object()) or the Object.create() method, but for this article we’ll stick to literal notation for its simplicity and visual aid.
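
For comparison, here’s roughly what those two alternatives look like; they build the same kind of object we created above:

// Constructor method: start with an empty object, then add properties
const trout1 = new Object();
trout1.name = 'Alpino-Pelted';

// Object.create(): the object you pass in becomes the new object's prototype
const trout2 = Object.create(furBearingTrout);
console.log(trout2.name); // 'Alpino-Pelted', found via the prototype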

Functions in objects are methods

When the property is a function we call it a method. Methods do something with the data stored in the object. Imagine we had the following properties:

const furBearingTrout = {
  name: 'Alpino-Pelted',
  url: 'http://www.furbearingtrout.com/fish2.html',
  view: function () {
    console.log(`View the ${this.name} trout at ${this.url}`);
  }
};

You call a method by accessing the property name it’s associated with followed by parentheses.

furBearingTrout.view(); // outputs 'View the Alpino-Pelted trout at http://www.furbearingtrout.com/fish2.html'

Why do I use ‘this’?

The ‘this’ keyword simply references the very object it’s inside of. So this.name is the same as saying furBearingTrout.name. Outside of the object we refer to furBearingTrout.name, but inside the object we refer to this.name.
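
One way to see this in action is to attach the same function to two different objects; this.name resolves differently depending on which object the method was called on. (The second trout here is invented for the example.)

const describe = function () {
  return this.name; // 'this' is resolved when the function is called
};

const anotherTrout = { name: 'Silver-Pelted' }; // hypothetical second object

furBearingTrout.describe = describe;
anotherTrout.describe = describe;

console.log(furBearingTrout.describe()); // 'Alpino-Pelted'
console.log(anotherTrout.describe()); // 'Silver-Pelted'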

There’s a lot more to objects than what I’ve written here. But if you understand these basics you’ll already be able to model some fairly complex real-world data and be able to manipulate it.

The Integrated Developer’s Environment

I was home sick from work the other day and had plenty of time to think. It occurred to me that whenever I go into the office I am very productive almost immediately. When I’m home, I tend to have little enthusiasm for programming. When I attempt to program at home I’m usually nowhere near as productive as I am at the office.

It hasn’t always been that way. When I lived in Las Vegas I wrote code at home on a regular basis. My immediate thought was that I don’t currently have the right hardware to be productive at home. In Las Vegas I had a decent desktop PC with two monitors and a hardwired internet connection. I currently have an old, slow laptop that shares a wireless connection among several people and lots of devices.

But is it really the hardware that holds me back? Most of my work is done in JavaScript which can be written with lightweight text editors/IDEs (I use Atom these days). I don’t need a lot of computing power for that. Honestly, my high-end rig at work is mostly for large imagery datasets and working with GIS.

I started realizing that it’s less the hardware and software that impact my productivity and more the intended use of those things. When I go to the office, I use that computer for work. My home computer is used for surfing the web, writing emails and watching Netflix. When I sit down at it, my brain switches to mindless mode. I find myself wandering, checking email or googling things that pop into my mind.

So it’s less the development environment and more the developer’s environment that influences his productivity. In my Las Vegas home I had set up my computer in a separate room and only really used it for programming. We had a separate laptop (the same one I have now) for web surfing and entertainment. For me, there has to be a psychological, if not physical, separation between work (anything that takes concentration and thought) and play. I would like to spend some time working on open source projects on the weekends, so it looks like I’m going to have to carve out some space in the house for a dedicated office.

Chrome Developer Tools

As a JavaScript developer I need tools that help me figure out what’s going on between my code and the browser. Thankfully, most major browsers today provide developer tools that do just that.

[Image: Google Chrome Toolbox]

With these tools you can see exactly how your code affects the browser. At runtime you can find errors in the code you’ve written or see how long your site takes to load. You can view, or even rewrite, your CSS rules to see what changes will look like before you commit them to your source files.

You can also dig into the browser itself and inspect its cookies, local storage and cache. And with web users quickly transitioning to mobile devices, developer tool device emulation can show what your site looks like on, and how it will interact with, phones and tablets.

Every year that passes gives us better developer tools from the major browser makers. We are fast coming to a point where any tool you choose will be just as good as another. But for one reason or another, developers tend to gravitate towards a particular set of tools.

When I surf the web I like to use more privacy-oriented browsers: well, anything but Google’s Chrome. But when it comes to debugging and developing code, Chrome takes first place in my world. I like the default look of the Chrome tools UI (although Mozilla’s dark theme is slightly more pleasant to look at if you’re into dark themes). I also find Firefox’s developer tools a little slow when emulating mobile devices, while Chrome is snappier. Other tools in Chrome also feel more polished and offer more functionality.

Some browser developer tools might have features that others don’t, but that’s usually only true until the others release their next version. Good ideas tend to spread quickly.

There are lots of tools out there for JavaScript developers and web designers. But Chrome’s developer tools provide great runtime debugging, design assistance and performance insights. If you’re a web developer and you’re not using these tools and features to the fullest, it’s worth taking the time to dive deep.

How to Change a GitHub Repository Language

I created a new GitHub repository today for a Node/Express project at work. After pushing the project code I went to GitHub and saw that the language for the project was listed as CSS. To be fair to GitHub, I did style my app with CSS. But since it’s a Node app, I expected to see the JavaScript tag instead.

It turned out the third-party image gallery library I was using had much larger files than anything I had written. GitHub’s Linguist library picked up on the larger files and used them to extrapolate CSS as the dominant technology in the app. I still don’t entirely understand why, since the library’s JavaScript files were three times the size of its CSS files.

Now I needed a way to change what the language tag said. Unfortunately, GitHub doesn’t give you a direct way to do this. The Linguist library does give you options to ignore files from third parties, though. Here’s how you do it:

  1. Create a .gitattributes file at the root of your local repository.
  2. Inside the .gitattributes file, type the path to the folder holding your third-party code. At the end of the path type "/*".
  3. After the path, type "linguist-vendored". Here is the example from the Linguist troubleshooting section:
    special-vendored-path/* linguist-vendored
  4. Save your file, commit it and push it to your remote GitHub repository.
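
An entry for a vendored image gallery library might look something like this; the folder name is hypothetical, so substitute whatever path actually holds your third-party code:

    public/gallery-lib/* linguist-vendored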

This takes the third-party code out of consideration for the Linguist algorithm. Once you refresh your GitHub page, the language tag should be different. If the language still doesn’t match what you expect, try adding the linguist-vendored flag to other folders to narrow the set of files Linguist scans.

Use Yarn in Place of npm

Condensed Version of This Post

Use Yarn in place of npm: workflows don’t change, packages load faster and you get a consistent node_modules structure.

yarn init = npm init
yarn install = npm install
yarn add [package] = npm install [package] --save
yarn add [package] --dev = npm install [package] --save-dev
yarn remove [package] = npm uninstall [package]

Longer Version of This Post

npm is currently the king of Node package managers. Yarn is an alternative package manager that tries to fix what can be problems for some npm users. Yarn provides faster installs, dependency consistency and shorter commands, all within the same workflow you are used to with npm.

Installation and Use

If you already use npm, install Yarn with npm install yarn -g. That’s it! You can now use the yarn commands just like you would with npm. If you feel silly installing npm’s replacement with npm, you can download an installer instead. Use your existing package.json file or create a new one with yarn init. Run yarn add [package] to install new package dependencies. Removing installed packages is as easy as yarn remove [package]. Install all of the dependencies of an existing project using yarn install, or even just yarn.

Deterministic Package Installs

Deterministic package installs is a fancy way of saying: the same module dependencies will be installed with the same structure on any machine using Yarn. The structure of dependencies in the node_modules directory can differ from machine to machine when using npm, which can cause a dependency to work on one machine but break on another. Yarn avoids this by writing a yarn.lock file that records the exact version of every installed dependency; commit that file and every machine builds the same tree.

Speed

Yarn installs packages faster than npm. Yarn starts by comparing a dependency against what’s already in the global Yarn cache. If a package isn’t in the cache, it’s downloaded and placed there. Once all dependencies are cached, Yarn copies the necessary files only once into the project’s node_modules directory.

Downloaded and cached packages don’t need to be re-downloaded in the future. If you nuke your node_modules folder and run yarn install again, your dependencies are copied from the cache into the new node_modules directory very quickly. If you start a new project somewhere else on the same machine, only dependencies that have never been used before are downloaded. The rest are pulled from the cache and merged with the downloaded ones. This makes for a very fast install.

Conclusion

Do you really need to use Yarn? Of course not. Lots of people use npm for their projects with little problem. But on projects where dependencies have to be installed separately by several users, module consistency can become a problem. Yarn solves this and provides other great enhancements over npm. It offers a use experience similar to npm’s: it provides all the same packages, is faster and has simpler commands. It can even tell you why a package is being used (yarn why [package]). There are few if any downsides, and you can always go back to npm.

5 Ways to Comment Your JSON

Comments aren’t part of the official JSON specification. In an old (2012) Google Plus post, Douglas Crockford said he removed them to preserve interoperability. But that same post suggests you can still use comments, so long as you strip them out through minification before parsing.

There are a few other ways to handle JSON comments besides minification:

  1. You can add a new data element to your object, with a key named "_comment_" or something similar and the actual comment as the value (see the sketch after this list). This method is slightly intriguing but feels kind of dirty. It looks like a hack. It is a hack! It also adds bulk to the network payload: JSON is meant to be a lightweight data exchange format, and comment elements take away from that.
  2. Use a script that programmatically removes comments from your JSON before it’s parsed. Sindre Sorhus published a comment-stripping module, strip-json-comments, which does just that (also shown in the sketch below). This is similar to Crockford’s minification method in that it removes the comments before parsing, but you can inline it in your code rather than run it during a build process.
  3. You can forgo comments in your JSON entirely. Put comments in the code where you make the data request in the first place. You should already know what kind of returned data to expect, so comments make sense there. You stay in your code without the need to view a separate file, which makes your code easier to understand.
  4. Finally, if JSON is being used as a configuration file or some other static data store, you might even try commenting it in a separate file. Put a text README in the same directory the configuration file is stored in. The README could contain a paragraph describing the data, or you could copy the JSON into the README and use inline comments.
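
Here’s a rough sketch of methods 1 and 2 together. The config data is invented, and the second half assumes the CommonJS version of Sindre Sorhus’s strip-json-comments module (npm install strip-json-comments):

// Method 1: a comment smuggled in as a data element
const withCommentKey = {
  "_comment_": "maxItems is capped by the server",
  "maxItems": 25
};

// Method 2: real comments, stripped before parsing
const stripJsonComments = require('strip-json-comments');

const raw = `{
  // hypothetical config with an inline comment
  "maxItems": 25
}`;

console.log(JSON.parse(stripJsonComments(raw)).maxItems); // 25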

There are several ways to take care of the problem of commenting JSON files. All have their strengths and weaknesses. The best method depends on your particular situation and needs.

Why Gulp is Great

In my last post I talked about why I started and then stopped using Grunt. Basically, Grunt seemed too slow and my workflow was halted too often while I waited for it to build. There are several other task-running/app-building tools out there (Broccoli, Cake, Jake…), but I decided to try Gulp first since it has a large user base and plenty of plugins to keep me from having to think too much.

At first, Gulp didn’t seem quite as straightforward as Grunt. Grunt was easy to use. You just had to write (sometimes lengthy) configuration objects for the plugins you wanted to run and then fire off the tasks using the command window. Even someone like me could figure out how to add a source file and a destination location to a minification plugin and be reasonably sure I would get a minified file out of it.

It was also very easy to visualize what your Gruntfile was doing because every task plugin worked independently of the rest. You would configure ten different tasks and then register them all together in a row and expect them to run one after another until they all completed.

With Gulp, you don’t just configure plugins; you write JavaScript code to define your tasks and how you want them run. A Gulp task asks you to require the plugins you want to use (or write a custom task in plain old JavaScript), then call gulp.src to provide a source file for the tasks to run on. Doing this opens a Node stream, which keeps your source object in memory. If you want to run one of the task plugins you required at the top of your script, you simply pass the in-memory object to it by calling the .pipe() method. You can continue piping the object from one task to another until you’re finished. Finally, you call gulp.dest and provide a destination location.

var gulp = require('gulp');
var plumber = require('gulp-plumber');
var addsrc = require('gulp-add-src');
var less = require('gulp-less');
var cssnano = require('gulp-cssnano');
var concatCss = require('gulp-concat-css');
var rename = require('gulp-rename');
var concat = require('gulp-concat');
var uglify = require('gulp-uglify');
var watch = require('gulp-watch');

gulp.task('less', function(){
    return gulp.src('./source/style/style.less')
        .pipe(plumber())
        .pipe(less())
        .pipe(cssnano())
        .pipe(addsrc.append(['./source/style/anotherStyleSheet.min.css', './source/style/stillAnotherStyleSheet.min.css']))
        .pipe(concatCss('concat.css'))
        .pipe(rename("style.min.css"))
        .pipe(gulp.dest('./destination/style/'));
});

gulp.task('js', function(){
    return gulp.src(['./source/scripts/javaScript.js'])
        .pipe(plumber())
        .pipe(uglify({
            mangle: false,
        }))
        .pipe(addsrc.prepend(['source/scripts/someJSLibrary.min.js', 
        'source/scripts/anotherJSFile.min.js','source/scripts/stillAnotherJSFile.min.js']))
        .pipe(concat("all.js"))
        .pipe(rename("finalFile.min.js"))
        .pipe(gulp.dest('./destination/scripts/'));
});

gulp.task('default', ['less', 'js'], function() {
    gulp.watch('./source/style/style.less', ['less']);
    gulp.watch('./source/scripts/javaScript.js', ['js']);
});

The great thing about using Node streams is that you don’t have to keep opening and closing files for each task like in Grunt. This lack of I/O overhead makes running a series of tasks very fast. Even so, you really need to use the built-in watch task to take advantage of this speed. In my experience, running a default task with four or five tasks in it from the command line was almost as slow as in Grunt. With the watch task running, it only took milliseconds to rebuild what it needed to. But I’m new to Gulp, so what do I know?

You can see in the code above that I used several plugins to manipulate the input file as it’s piped down the stream. There are two I found particularly helpful. The first is gulp-plumber, which is basically a patch that keeps streams from being unpiped when an error is encountered. Supposedly, streams breaking on error will be fixed in Gulp 4.0.

The second helpful plugin is gulp-add-src, which does exactly what the name says: it lets you add additional source files to your stream so you can do neat things like concatenation. With these and other plugins I haven’t found anything in Gulp that would keep me from doing everything I could with Grunt.

The only thing I really don’t like about Gulp is the icon. It’s a cup with a straw in it and the word Gulp across its side. A cup by itself indicates an ability to gulp what is in it. But you don’t gulp through a straw, you sip or suck. Who wants their product to suck? And sip indicates a lack of passion. So what’s with the straw?

[Image: Gulp.js cup icon]