Time to Start Thinking of Gardens

It’s that time of year again. The time when I wait too long to plant my cool-weather garden. Then I wait too long to plant my warm-weather garden. All of that comes after I’ve already forgotten to amend the soil properly for the particular plants I want to grow. But hey, every year’s a new year. Even though I should have already sown my first round of carrots, radishes, lettuce and peas, I can still get them in a little late. And there’s always Fall and early Winter.


My biggest goal this year is to get a harvest of winter squash off the vine and onto my plate. Last year’s crop was absolutely decimated by squash bugs. I had amazing plants with beautiful leaves but I let the squash bugs get established. They destroyed everything. It’s hard to see vibrant plants start to put out fruit only to see everything killed by little vine-boring punks.

I’m determined that this year will be different. I plan on putting in fewer plants and defending them to the death against the insidious squash bug. There are lots of great ideas in books and on the internet for ways to kill or deter them organically (the only way I grow food plants). One can try Castile soap sprays, diatomaceous earth, row covers, traps, companion plants, oils and more. I’m willing to try them all. The ultimate, of course, would be to design a garden defense system that uses computer vision and laser beams to blast bugs. I might need to put a little more thought into that one.

I love Kabocha, Acorn, Delicata and Spaghetti squash. I’d love them even more if they came from my garden rather than the store’s shelves. With a little care and attention, along with a healthy dose of bug violence, I might be able to make it happen this year. Now I just have to go put it all on the calendar so I don’t forget to actually do it.

Chrome Developer Tools

As a JavaScript developer I need tools that help me figure out what’s going on between my code and the browser. Thankfully, most major browsers today provide developer tools that do just that.


Google Chrome Toolbox


With these tools you can see exactly how your code affects the browser. At runtime you can find errors in the code you’ve written or see how long your site takes to load. You can view, or even rewrite, your CSS rules to see what changes will look like before you ever commit them to your source files.


You can also dig into the browser itself and inspect its cookies, local storage and cache. And with web users quickly transitioning to mobile devices, the device emulation built into developer tools can show how your site will look and behave on phones and tablets.
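
As a quick illustration, here are a few lines you can paste straight into the DevTools console to poke at that storage (the key and value are invented for the example):

// Run these in the DevTools console; 'theme' is just an example key
document.cookie;                              // the cookies visible to the current page
localStorage.setItem('theme', 'dark');        // write a value to local storage
localStorage.getItem('theme');                // read it back
console.table(Object.entries(localStorage));  // show everything in local storage as a table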


Every year that passes gives us better developer tools from the major browser makers. We are fast coming to a point where any tool you choose will be just as good as another. But for one reason or another developers tend to find themselves gravitating towards a particular set of tools.


When I surf the web I like to use more privacy-oriented browsers like, well, anything but Google’s Chrome. But when it comes to debugging and developing code, Chrome takes first place in my world. I like the default look of the Chrome tools UI (although Mozilla’s Dark theme is slightly more pleasant to look at if you’re into dark themes). I also find Firefox’s developer tools a little slow when emulating mobile devices, while Chrome’s are snappier. Other tools in Chrome also appear more polished and have more functionality.

Some browser developer tools might have features that others don’t but that’s usually only true until the others release their next version. Good ideas tend to spread themselves around quickly.


There are lots of tools out there for JavaScript developers and web designers. But Chrome’s developer tools provide great runtime debugging, design assistance and performance insights. If you’re a web developer and are not using these tools and features to the fullest, it’s worth taking the time to dive deep.

How to Change a Github Repository Language

I created a new Github repository today for a Node/Express project at work. After pushing the project code I went to Github and saw that the language for the project was listed as CSS. To be fair to Github, I did style my app with CSS. But as it’s a Node app, I expected to see the JavaScript tag instead.

It turned out the third-party image gallery library I was using had much larger files than anything I was writing. Github’s Linguist library picked up on the larger files and used those to extrapolate CSS as the dominant technology in the app. I still don’t entirely understand why, since the library’s JavaScript files were three times the size of its CSS files.

Now I needed a way to change what the language tag said. Unfortunately, Github doesn’t give you a good way to do this. The Linguist library does give you options to ignore files from third parties though. Here’s how you do it:

  1. Create a .gitattributes file at the root of your local repository.
  2. Inside the .gitattributes file, type the path to the folder that holds your third-party code, followed by “/*”.
  3. After the path, type “linguist-vendored”. Here is the example from the Linguist troubleshooting section: 
    special-vendored-path/* linguist-vendored
  4. Save your file, commit it and push it to your remote Github repository.

This takes the third-party code out of consideration for the Linguist algorithm. Once you refresh your Github page the language tag should be different. If the language still doesn’t match what you think it should, try adding the “linguist-vendored” tag to other folders to reduce the types of files Linguist searches.
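
For example, if the third-party gallery code lived in an assets/gallery folder (a placeholder path, not the actual project layout), the .gitattributes file could look like this:

# Tell Linguist to skip bundled third-party code when detecting the repository language
assets/gallery/* linguist-vendored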

Use Yarn in Place of npm

Condensed Version of This Post

Use Yarn in place of npm: Workflows don’t change; Packages load faster; Consistent node_modules structure.

yarn init = npm init
yarn install = npm install
yarn add [package] = npm install [package] --save
yarn add [package] --dev = npm install [package] --save-dev
yarn remove [package] = npm uninstall [package]


Longer Version of This Post

npm is currently king of the Node package managers. Yarn is an alternative package manager that tries to fix what could be problems for some npm users. Yarn provides faster load times, dependency consistency and shorter commands, all within the same workflow you are used to with npm.


Installation and Use

If you already use npm, install Yarn with npm install yarn -g. That’s it! You can now use the yarn commands just like you would with npm. If you feel silly installing npm’s replacement with npm, you can download an installer instead. Use your existing package.json file or create a new one with yarn init. Run yarn add [package] to install new package dependencies. Removing installed packages is as easy as yarn remove [package]. Install all of the dependencies of an existing project using yarn install, or even just yarn.
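
To make that concrete, a first session might look something like this (express and jest are just example package names):

npm install -g yarn     # install Yarn itself using npm
yarn init -y            # create a package.json, accepting the default answers
yarn add express        # add a runtime dependency
yarn add jest --dev     # add a development dependency
yarn remove express     # remove a dependency
yarn                    # install everything listed in package.json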


Deterministic Package Installs

Deterministic Package Installs is a fancy way of saying: The same module dependencies will be installed with the same structure on any machine using yarn. The structure of dependencies in the node_modules directory can be different from machine to machine when using npm. This can potentially cause a dependency to work on one machine but break on another.
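
Under the hood, Yarn records the exact version it resolved for every dependency in a yarn.lock file, and that file is what lets another machine reproduce the same install. An entry looks roughly like this (the package name, version and hash are invented for illustration):

# yarn.lock is generated automatically; commit it, but don't edit it by hand
left-pad@^1.1.0:
  version "1.1.3"
  resolved "https://registry.yarnpkg.com/left-pad/-/left-pad-1.1.3.tgz#abc123"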

Speed

Yarn installs packages faster than npm. Yarn starts by comparing each dependency against what’s already in the global Yarn cache. If a package isn’t in the cache, it’s downloaded and added to it. Once all dependencies are cached, Yarn copies the necessary files just once into the project’s node_modules directory.


Downloaded and cached packages don’t need to be re-downloaded in the future. If you nuke your node_modules folder and run yarn install  again, your dependencies will be copied from the cache into your new node_modules directory very quickly. If you start a new project somewhere else on the same machine, only dependencies that have never been used elsewhere are downloaded. The rest are pulled from the cache and merged with the downloaded ones. This makes for a very fast load.
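
If you’re curious where that cache lives, Yarn will tell you (these subcommands are from classic Yarn; behavior may differ in newer versions):

yarn cache dir      # print the path to the global package cache
yarn cache list     # list the packages currently in the cache
yarn cache clean    # wipe the cache if you ever need to reclaim space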

Conclusion

Do you really need to use Yarn? Of course not. Lots of people use npm for their projects with few problems. But on projects where dependencies have to be installed separately by several users, module consistency can become a problem. Yarn solves this and provides other great enhancements over npm. It offers a user experience similar to npm’s, provides all the same packages, is faster and has simpler commands. It can even tell you why a package is being used. There are few if any downsides, and you can always go back to npm.

5 Ways to Comment Your JSON

Comments aren’t part of the official JSON specification. In an old (2012) Google Plus post, Douglas Crockford explained that he removed them to preserve interoperability. But that same post suggests you can still use comments, so long as you remove them through minification before parsing.

There are a few other ways to handle JSON comments besides minification:

  1. You can add a new data element to your object. The element key would be named “_comment_” or something similar, and the value would be the actual comment. This method is slightly intriguing but feels kind of dirty. It looks like a hack. It is a hack! It also adds bulk to the network payload. JSON is meant to be a lightweight data interchange format, and comment elements take away from that.
  2. Use scripts that programmatically remove comments from your JSON before it’s parsed. Sindre Sorhus published a comment-stripping module which does just that. This is similar to Crockford’s method of minification in that it removes the comments before parsing, but you can inline it in your code rather than running it during a build step (there’s a short sketch of this, and of option 1, after this list).
  3. You can forget comments in your JSON entirely. Put comments in the code where you make the data request in the first place. You should already know what kind of returned data is expected so comments would make sense here. You can stay in your code without the need to view a separate file. This makes your code easier to understand.
  4. Finally, if JSON is being used as a configuration file or some other static data store, you might even try commenting it in a separate file. Put a text README in the same directory where the configuration file is stored. The README could contain a paragraph describing the data, or you could copy the JSON into the README and use inline comments.
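
Here is a minimal sketch of options 1 and 2. The keys, values and variable names are invented, and I’m assuming the Sindre Sorhus module mentioned above is strip-json-comments:

// Option 1: embed the comment as a data element the consumer simply ignores
var jsonWithCommentKey = '{ "_comment_": "rotationSpeed is in degrees per second", "rotationSpeed": 180 }';
console.log(JSON.parse(jsonWithCommentKey).rotationSpeed); // 180

// Option 2: write real comments, then strip them out just before parsing
var stripJsonComments = require('strip-json-comments');
var jsonWithComments = '{ /* degrees per second */ "rotationSpeed": 180 }';
console.log(JSON.parse(stripJsonComments(jsonWithComments)).rotationSpeed); // 180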

There are several ways to take care of the problem of commenting JSON files. All have their strengths and weaknesses. The best method depends on your particular situation and needs.

A First Crack at HTML5/JavaScript Game Development

For a while now I’ve been wanting to start writing JavaScript/HTML5-based games for the web. I’ve always been drawn to simple games like the old Asteroids and Galaga, where you have a ship and a bunch of bad stuff trying to destroy you. They’re easy to learn, play and waste your time with, and yes, they give you that easy, addictive sense that you’re actually accomplishing something worthwhile when you break your high score by one point.

One of the coolest modern versions of these old arcade classics is actually kind of useful. Ztype is a simple shooter game along the lines of Galaga, but you’re shooting words: you have to type them correctly or your ship doesn’t shoot. It’s an addictive game with great graphics and sounds.

ZTYPE


But I needed something simpler to get me started and familiar with the game engine I had chosen: Phaser. I found an amazing tutorial on how to rebuild Asteroids over on zekechan.net. It is surprisingly straightforward, provides full code to check yours against, and is easy to follow while going deep enough to take you through developing an entire game.
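
For anyone who hasn’t seen Phaser before, this is roughly the skeleton that kind of game builds on. It’s a sketch assuming Phaser 2 (the version current at the time); the canvas size, container id and asset path are placeholders:

// A minimal Phaser 2-style skeleton; sizes, ids and asset paths are made up
var ship;

var game = new Phaser.Game(640, 480, Phaser.AUTO, 'game-container', {
    preload: preload,
    create: create,
    update: update
});

function preload() {
    game.load.image('ship', 'assets/ship.png'); // load sprites before the game starts
}

function create() {
    ship = game.add.sprite(game.world.centerX, game.world.centerY, 'ship');
    ship.anchor.setTo(0.5, 0.5); // rotate the ship around its center
}

function update() {
    // per-frame logic goes here: input, movement, collisions
}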

Asteroids screenshot


However, by the time I was finished building the game, I was a little bored with the idea of a ship trying to destroy asteroids, so I switched it up a bit to include a political theme appropriate for the current presidential race. You can check it out at http://ryanrandom.com/ted.

Ted Cruz vs Donald Trump


Keeping My House From Burning

Late last year my family and I moved to a two-story, 1970s-era house on 1 1/2 acres of land. Since then I’ve been thinking about retrofitting the house with Ethernet cable to most rooms. This is just one of many projects I have planned to update the house and add a few “smart home” features to make managing it and the property easier.

But before I can move on to the fun stuff (or the Ethernet cabling which I don’t consider much fun) there are a few projects that have to be finished out of necessity. I undertook one of them last weekend in the hopes of preventing the house from burning down. Let me explain.

A couple of months ago we finished up a project converting the garage into livable space. In the process of the renovation we discovered that the existing 30 amp dryer circuit was wired using aluminum wire. This wasn’t necessarily a bad thing, since the wire was sized right, but our building inspector wanted to see a four-prong dryer outlet, which meant running a new four-wire cable.

So my electrician pulled the breaker and put in a new circuit. That problem was solved but now, all of a sudden, my irrigation pump wasn’t getting power. We started poking around and discovered that someone, years ago, had simply continued the dryer circuit from the laundry room, through the wall and out to the pump.

This was a major problem for two reasons:

1. The spliced-in wire going to the pump was only 12 gauge which is only big enough for a 20 amp circuit.
2. The splice combined copper and aluminum wires under the same screw.

The smaller-gauge wire essentially turned the circuit into a giant fuse that, technically, could have burned up. Almost all of it was in conduit, however, so it probably wouldn’t have caused much damage. But it was still a problem.

The bigger issue to me was the mixing of copper and aluminum wire. That combination can lead to a chemical reaction called electrolysis, which causes oxidation. The oxidation dramatically increases the resistance of the connection, leading to excessive heat build-up and, potentially, a fire.

None of this was of immediate concern since the circuit was dead and the dryer was on its own now. But I needed my irrigation pump back on so I could:

1. Keep my landscape from dying.
2. Put in an automatic irrigation system and potentially add wifi connectivity so I can turn my sprinklers on from Fiji – just in case I need to.

Thankfully my irrigation pump only needed a 20 amp circuit so the 12 gauge wire could stay. I also wanted to reuse the existing aluminum wire since it wouldn’t cost anything to keep it and it was already run through the house. All I really needed to do was take care of the aluminum to copper problem. There is a special connector called Alumicon but my aluminum wiring was too big (8 gauge) for it.

I opted for tin-plated aluminum splicing blocks which are approved for both copper and aluminum wire. The copper wire goes in one side of the block and gets screwed down while the aluminum wire goes in the other side and gets secured. The two metals never touch! Wrapped with some rubber splicing tape they actually look pretty good.

Now I can water my grass, not worry about my house burning down (at least for one reason) and start thinking about my next, hopefully more fun, project. Not a bad weekend.

Please Update Old ESRI JavaScript API Samples

I’m confused – why does ESRI insist on keeping JavaScript API samples written in a legacy (non-AMD) module require style? I can understand keeping the legacy code in the API reference since a lot of developers probably wrote a lot of code using it. But current sample code should reflect current programming styles. And yes, the samples I’m talking about are the current ones.

ESRI Javascript API screenshot

Somebody actually goes in and updates the CDN reference to the current JavaScript API. Would it be that difficult to convert the requires to an AMD wrapper and change a few module references? Just a thought.
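
As a rough sketch of what that conversion involves, here is a legacy-style snippet next to its AMD equivalent. The div id and map options are placeholders, not ESRI’s actual sample code:

// Legacy (non-AMD) style still used in some samples
dojo.require("esri.map");
dojo.ready(function() {
    var map = new esri.Map("mapDiv");
});

// The same map created with an AMD require (3.x style)
require(["esri/map", "dojo/domReady!"], function(Map) {
    var map = new Map("mapDiv", {
        basemap: "topo",
        center: [-93.5, 36.9],
        zoom: 6
    });
});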

Mapping The Dead

Skull and Crossbones

I’ve seen a lot of interesting mapping applications in the news over the last year. One that’s caught my attention is cemetery mapping. I had never really thought about just how well suited a cemetery is to being mapped. Each plot has a distinct spatial location. Plots have measurable attributes like occupant, location, depth and width. They are often laid out like a grid or a table, but sometimes (especially on older properties) they are spread out seemingly without much thought to being easily located again.

Locating a plot is obviously the most important function of cemetery mapping. Caretakers have to be able to determine where a body is located so they can avoid accidentally digging it up when placing another body. Relatives of the deceased want to know where their family members are too, so that they, and those who come after them, can find them again.

One of the first articles I came across last year was about the cemeteries in the City of Mackinac Island, Michigan. The city’s cemetery committee (I bet those meetings are fun) recognized that its current data holdings (hand-drawn paper maps, incomplete lists of cemetery residents and the memories of senior committee members who are increasingly ending up in the cemetery themselves) were not adequate. So they started mapping out plots using GPS and building a database of names.

The City of Mackinac Island Cemetery Committee hopes to have a completed digital mapping system by next June, which will help the city clerk’s office keep track of plots and burials more efficiently. The map is one of many updates the city is considering relating to its cemeteries and burial policies.

It didn’t surprise me to find that some cities are using GIS technology to keep track of cemeteries. What did surprise me was the number of software packages that have been created for mapping and managing them. A quick search for cemetery mapping software reveals several pages of apps, services and companies with interesting names like Memorial Business Systems, CemMapper and The Crypt Keeper.

Yet with all of these software solutions, none of the cemeteries that I was interested in searching had detailed mapping of their plots. Only one even had a website. Although the mapping technology is there, this kind of project doesn’t seem like one many cemeteries are willing to undertake.

Why Gulp is Great

In my last post I talked about why I started and then stopped using Grunt. Basically, Grunt seemed too slow and my workflow was being halted too often while I waited for it to build. There are several other task running/app building tools out there (Broccoli, Cake, Jake…) but I decided to try Gulp first since it has a large user base and there are plenty of plugins out there to keep me from having to think too much.

At first, Gulp didn’t seem quite as straightforward as Grunt. Grunt was easy to use. You just had to write (sometimes lengthy) configuration objects for the plugins you wanted to run and then fire off the tasks using the command window. Even someone like me could figure out how to add a source file and a destination location to a minification plugin and be reasonably sure I would get a minified file out of it.

It was also very easy to visualize what your Gruntfile was doing because every task plugin worked independently of the rest. You would configure ten different tasks and then register them all together in a row and expect them to run one after another until they all completed.

With Gulp, you don’t just configure plugins, you write JavaScript code to define your tasks and how you want them run. In a Gulp task you require the plugins you want to use (or write a custom task in plain old JavaScript), then call gulp.src to provide source files for the tasks to run on. Doing this opens a Node stream which keeps your source object in memory. If you want to run one of the task plugins you required at the top of your script, you simply pass the in-memory object to it by calling the .pipe() method. You can continue piping the object from one task to another until you’re finished. Finally, you call gulp.dest and provide a destination location.

var gulp = require('gulp');
var plumber = require('gulp-plumber');
var addsrc = require('gulp-add-src');
var less = require('gulp-less');
var cssnano = require('gulp-cssnano');
var concatCss = require('gulp-concat-css');
var rename = require('gulp-rename');
var concat = require('gulp-concat');
var uglify = require('gulp-uglify');
var watch = require('gulp-watch');

gulp.task('less', function(){
    return gulp.src('./source/style/style.less')
        .pipe(plumber())
        .pipe(less())
        .pipe(cssnano())
        .pipe(addsrc.append(['./source/style/anotherStyleSheet.min.css', './source/style/stillAnotherStyleSheet.min.css']))
        .pipe(concatCss('concat.css'))
        .pipe(rename("style.min.css"))
        .pipe(gulp.dest('./destination/style/'));
});

gulp.task('js', function(){
    return gulp.src(['./source/scripts/javaScript.js'])
        .pipe(plumber())
        .pipe(uglify({
            mangle: false,
        }))
        .pipe(addsrc.prepend(['source/scripts/someJSLibrary.min.js', 
        'source/scripts/anotherJSFile.min.js','source/scripts/stillAnotherJSFile.min.js']))
        .pipe(concat("all.js"))
        .pipe(rename("finalFile.min.js"))
        .pipe(gulp.dest('./destination/scripts/'));
});

gulp.task('default', ['less', 'js'], function() {
    // Re-run the matching task whenever a watched source file changes
    gulp.watch('./source/style/style.less', ['less']);
    gulp.watch('./source/scripts/javaScript.js', ['js']);
});

The great thing about using Node streams is that you don’t have to keep opening and closing files for each task like in Grunt. This lack of I/O overhead makes running a series of tasks very fast. Even so, you really need to use the built-in watch task to take advantage of this speed. In my experience, running a default task with four or five tasks in it from the command line was almost as slow as in Grunt. With the watch task running, it only took milliseconds to rebuild what it needed to. But I’m new to Gulp, so what do I know?

You can see in the code above that I used several plugins to manipulate the input file as it is piped down the stream. There are two that I found particularly helpful. The first is Gulp-Plumber which is basically a patch that keeps streams from being un-piped when an error is encountered. Supposedly, streams breaking on error will be fixed in version 4.0.

The second helpful plugin here is Gulp-Add-Src which does exactly what the title says. You can add additional source files to your stream so you can do neat things like concatenation. With these and other plugins I haven’t found anything with Gulp that would keep me from doing everything I could with Grunt.

The only thing I really don’t like about Gulp is the icon. It’s a cup with a straw in it and the word Gulp across its side. A cup by itself indicates an ability to gulp what is in it. But you don’t gulp through a straw, you sip or suck. Who wants their product to suck? And sip indicates a lack of passion. So what’s with the straw?

Gulp.js cup icon