How to Change a GitHub Repository Language

I created a new GitHub repository today for a Node/Express project at work. After pushing the project code I went to GitHub and saw that the language for the project was listed as CSS. To be fair to GitHub, I did style my app with CSS. But as it’s a Node app, I expected to see the JavaScript tag instead.

It turned out the third-party image gallery library I was using had much larger files than anything I was writing. GitHub’s Linguist library saw those larger files and concluded that CSS was the dominant technology in the app. I still don’t entirely understand why, since the library’s JavaScript files were three times the size of its CSS files.

Now I needed a way to change what the language tag said. Unfortunately, GitHub doesn’t give you a direct way to do this. Linguist does, however, give you a way to mark third-party files so they’re ignored. Here’s how you do it:

  1. Create a .gitattributes file at the root of your local repository.
  2. Inside the .gitattributes file, add the path to the folder holding your third-party code, followed by “/*”. 
  3. After the path, add “linguist-vendored”. Here is the example from the Linguist troubleshooting section: 
    special-vendored-path/* linguist-vendored

    Save your file, commit it and push it to your remote GitHub repository.

This takes the third-party code out of consideration for the Linguist algorithm. Once you refresh your GitHub page the language tag should be different. If the language still doesn’t match what you expect, try adding “linguist-vendored” to other folders to further narrow the files Linguist considers.
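
For instance, here’s what a .gitattributes file might look like with a couple of third-party folders marked (the folder names here are hypothetical stand-ins for your own paths):

    vendor/image-gallery/* linguist-vendored
    assets/third-party/* linguist-vendored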

Use Yarn in Place of npm

Condensed Version of This Post

Use Yarn in place of npm: workflows don’t change; packages load faster; the node_modules structure stays consistent across machines.

yarn init = npm init
yarn install = npm install
yarn add [package] = npm install [package] --save
yarn add [package] --dev = npm install [package] --save-dev
yarn remove [package] = npm uninstall [package]


Longer Version of This Post

npm is currently king of the Node package managers. Yarn is an alternative package manager that addresses several of npm’s pain points: it provides faster installs, consistent dependency resolution and shorter commands, all within the same workflow you’re used to with npm.


Installation and Use

If you already use npm, install Yarn with npm install yarn -g. That’s it! You can now use the yarn commands just like you would with npm. If you feel silly installing npm’s replacement with npm, you can download an installer instead. Use your existing package.json file or create a new one with yarn init. Run yarn add [package] to install new package dependencies. Removing installed packages is as easy as yarn remove [package]. Install all of the dependencies of an existing project using yarn install, or even just yarn.
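
A typical first session might look something like this (express and nodemon here are just example packages):

npm install yarn -g        # one-time: install Yarn itself
yarn init                  # create a package.json, or reuse an existing one
yarn add express           # add a dependency
yarn add nodemon --dev     # add a dev-only dependency
yarn remove nodemon        # uninstall a package
yarn                       # shorthand for yarn install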


Deterministic Package Installs

Deterministic package installs is a fancy way of saying that the same module dependencies will be installed with the same structure on any machine using Yarn. With npm, the structure of the node_modules directory can differ from machine to machine, which can cause a dependency to work on one machine but break on another.
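
Yarn gets this determinism from a yarn.lock file it writes at the root of your project, pinning every dependency to an exact version and download location; commit it, and every machine resolves the identical tree. A simplified excerpt might look like this (the package, version and URL are illustrative, and the checksum is elided):

# yarn lockfile v1

left-pad@^1.1.3:
  version "1.3.0"
  resolved "https://registry.yarnpkg.com/left-pad/-/left-pad-1.3.0.tgz#<sha1>"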

Speed

Yarn installs packages faster than npm. It starts by checking each dependency against what’s already in the global Yarn cache. If a package isn’t cached, it’s downloaded and added to the cache. Once all dependencies are cached, Yarn copies the necessary files into the project’s node_modules directory in a single pass.


Downloaded and cached packages don’t need to be re-downloaded in the future. If you nuke your node_modules folder and run yarn install again, your dependencies are copied from the cache into the new node_modules directory very quickly. If you start a new project elsewhere on the same machine, only dependencies that have never been installed before are downloaded; the rest are pulled from the cache and merged with the new downloads. This makes for very fast installs.
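
If you want to poke at the cache yourself, classic Yarn (1.x) ships a couple of cache subcommands:

yarn cache dir     # print the location of the global cache
yarn cache clean   # empty the cache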

Conclusion

Do you really need to use Yarn? Of course not. Lots of people use npm for their projects with little problem. But on projects where dependencies have to be installed separately by several users, module consistency can become a problem. Yarn solves this and provides other nice enhancements over npm. It offers a familiar user experience, serves all the same packages, is faster and has simpler commands. It can even tell you why a package is being used (yarn why). There are few if any downsides, and you can always go back to npm.

5 Ways to Comment Your JSON

Comments aren’t part of the official JSON specification. According to an old (2012) Google Plus post by Douglas Crockford, he removed them to preserve interoperability. But that same post suggests you can still use comments, so long as you strip them out through minification before parsing.

There are a few other ways to handle JSON comments besides minification:

  1. You can add a new data element to your object, with a key named “_comment_” or something similar and the actual comment as its value. This method is slightly intriguing but feels kind of dirty. It looks like a hack because it is a hack! It also adds bulk to the network payload: JSON is meant to be a lightweight data interchange, and comment elements work against that (a sketch follows this list).
  2. Use scripts that programmatically remove comments from your JSON before it’s parsed. Sindre Sorhus published a comment-stripping module which does just that, as shown in the sketch after this list. This is similar to Crockford’s minification method in that it removes the comments before parsing, but you can inline it in your code rather than running it during a build process.
  3. You can forgo comments in your JSON entirely and instead put them in the code where you make the data request in the first place. You should already know what kind of returned data to expect, so comments make sense there, and you can stay in your code without the need to view a separate file. This makes your code easier to understand.
  4. Finally, if the JSON is being used as a configuration file or some other static data store, you might even try commenting it in a separate file. Put a text README in the same directory as the configuration file. The README could contain a paragraph describing the data, or you could copy the JSON into the README and use inline comments there.
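
Here’s a quick sketch of the first two approaches in Node. It assumes the strip-json-comments module is installed; the data itself is made up:

var stripJsonComments = require('strip-json-comments');

// Approach 1: the comment rides along as a data element.
var withCommentKey = JSON.parse(
    '{ "_comment_": "retries is the max attempt count", "retries": 3 }'
);

// Approach 2: real // comments, stripped out just before parsing.
var raw = '{\n  // max attempt count\n  "retries": 3\n}';
var parsed = JSON.parse(stripJsonComments(raw));

console.log(withCommentKey.retries, parsed.retries); // logs: 3 3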

There are several ways to take care of the problem of commenting JSON files. All have their strengths and weaknesses. The best method depends on your particular situation and needs.

A First Crack at HTML5/JavaScript Game Development

For a while now I’ve been wanting to start writing JavaScript/HTML5-based games for the web. I’ve always been drawn to simple games like the old Asteroids and Galaga, where you have a ship and a bunch of bad stuff trying to destroy you. They’re easy to learn, play and waste your time with, and yes, they give you that easy, addictive sense that you are actually accomplishing something worthwhile when you beat your high score by one point.

One of the coolest modern takes on these old arcade classics is actually kind of useful. ZType is a simple shooter along the lines of Galaga, except you shoot words: type them correctly or your ship doesn’t fire. It’s an addictive game with great graphics and sounds.

ZTYPE


But I needed something simpler to get me started and familiar with the game engine I had chosen to start with – Phaser. I found an amazing tutorial on how to re-build Asteroids over on zekechan.net. It is surprisingly straightforward: it provides full code to check yours against and is easy to follow, yet goes in depth enough to take you through developing an entire game.

Asteroids screenshot


However, by the time I was finished building the game, I was a little bored with the idea of a ship trying to destroy asteroids so I switched it up a bit to include a political theme appropriate for the current presidential race. You can check it out at http://ryanrandom.com/ted .

Ted Cruz vs Donald Trump


Keeping My House From Burning

Late last year my family and I moved to a two-story, 1970s-era house on 1 1/2 acres of land. Since then I’ve been thinking about retrofitting the house with Ethernet cable to most rooms. This is just one of many projects I have planned to update the house and add a few “smart home” features to make management of it and the property easier.

But before I can move on to the fun stuff (or the Ethernet cabling which I don’t consider much fun) there are a few projects that have to be finished out of necessity. I undertook one of them last weekend in the hopes of preventing the house from burning down. Let me explain.

A couple of months ago we finished up a project converting the garage into livable space. In the process of the renovation we discovered that the existing 30-amp dryer circuit was wired with aluminum wire. This wasn’t necessarily a bad thing, since the wire was sized right, but our building inspector wanted to see a four-prong dryer outlet, which meant running a new four-wire cable.

So my electrician pulled the breaker and put in a new circuit. That problem was solved but now, all of a sudden, my irrigation pump wasn’t getting power. We started poking around and discovered that someone, years ago, had simply continued the dryer circuit from the laundry room, through the wall and out to the pump.

This was a major problem for two reasons:

1. The spliced-in wire going to the pump was just 12-gauge, which is only big enough for a 20-amp circuit.
2. The splice combined copper and aluminum wires under the same screw.

On a 30-amp breaker, the undersized wire basically acted as a giant fuse that, technically, could have burned up. Almost all of it was in conduit, however, so it probably wouldn’t have caused much damage. But it was still a problem.

The bigger issue to me was the mixing of copper and aluminum wire. That combination can lead to a chemical reaction called electrolysis, which causes oxidation; oxidation dramatically increases the resistance of the connection, leading to excessive heat build-up and, potentially, a fire.

None of this was of immediate concern since the circuit was dead and the dryer was on its own now. But I needed my irrigation pump back on so I could:

1. Keep my landscape from dying.
2. Put in an automatic irrigation system and potentially add wifi connectivity so I can turn my sprinklers on from Fiji – just in case I need to.

Thankfully my irrigation pump only needed a 20-amp circuit, so the 12-gauge wire could stay. I also wanted to reuse the existing aluminum wire, since it wouldn’t cost anything to keep it and it was already run through the house. All I really needed to do was take care of the aluminum-to-copper problem. There is a special connector called the AlumiConn, but my aluminum wiring was too big (8-gauge) for it.

I opted for tin-plated aluminum splicing blocks which are approved for both copper and aluminum wire. The copper wire goes in one side of the block and gets screwed down while the aluminum wire goes in the other side and gets secured. The two metals never touch! Wrapped with some rubber splicing tape they actually look pretty good.

Now I can water my grass, not worry about my house burning down (at least for one reason) and start thinking about my next, hopefully more fun, project. Not a bad weekend.

Please Update Old ESRI JavaScript API Samples

I’m confused – why does ESRI insist on keeping JavaScript API samples written in a legacy (non-AMD) module require style? I can understand keeping the legacy code in the API reference since a lot of developers probably wrote a lot of code using it. But current sample code should reflect current programming styles. And yes, the samples I’m talking about are the current ones.

ESRI JavaScript API screenshot

Somebody actually goes in and updates the CDN reference to the current JavaScript API. Would it be that difficult to convert the requires to an AMD wrapper and change a few module references while they’re at it? Just a thought.
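
For reference, the conversion is roughly this small (the module paths come from the current 3.x API; the map options are just example values):

// Legacy, non-AMD style still found in some samples:
dojo.require("esri.map");

function init() {
    var map = new esri.Map("mapDiv");
}
dojo.addOnLoad(init);

// The AMD equivalent:
require(["esri/map", "dojo/domReady!"], function (Map) {
    var map = new Map("mapDiv", {
        basemap: "topo",           // example values only
        center: [-122.45, 37.75],
        zoom: 13
    });
});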

Mapping The Dead

Skull and Crossbones

I’ve seen a lot of interesting mapping applications in the news over the last year. One that’s caught my attention is cemetery mapping. I had never really thought about just how well suited a cemetery is to being mapped. Each plot has a distinct spatial location. Plots have measurable attributes like occupant, location, depth and width. They are often laid out like a grid or a table, but sometimes (especially on older properties) they are spread out seemingly without much thought to being easily located again.

Location is obviously the most important attribute in cemetery mapping. Caretakers have to be able to determine where a body is located so they can avoid accidentally digging it up when placing another body. Relatives of the deceased want to know where their family members are too, so they, and those who come after them, can find them again.

One of the first articles I came across last year was about the cemeteries in the City of Mackinac Island, Michigan. The city’s cemetery committee (I bet those meetings are fun) recognized that its current data holdings (hand-drawn paper maps, incomplete lists of cemetery residents and the memories of senior committee members who are increasingly ending up in the cemetery themselves) were not adequate. So they started mapping plots with GPS and building a database of names.

The City of Mackinac Island Cemetery Committee hopes to have a completed digital mapping system by next June, which will help the city clerk’s office keep track of plots and burials more efficiently. The map is one of many updates the city is considering relating to its cemeteries and burial policies.

It didn’t surprise me to find that some cities are using GIS technology to keep track of cemeteries. What did surprise me was the number of software packages that have been created for mapping and managing them. A quick search for cemetery mapping software reveals several pages of apps, services and companies with interesting names like Memorial Business Systems, CemMapper and The Crypt Keeper.

Yet with all of these software solutions, none of the cemeteries that I was interested in searching had detailed mapping of their plots. Only one even had a website. Although the mapping technology is there, this kind of project doesn’t seem like one many cemeteries are willing to undertake.

Why Gulp is Great

In my last post I talked about why I started and then stopped using Grunt. Basically, Grunt seemed too slow and my workflow was being halted too often while I waited for it to build. There are several other task running/app building tools out there (Broccoli, Cake, Jake…) but I decided to try Gulp first since it has a large user base and there are plenty of plugins out there to keep me from having to think too much.

At first, Gulp didn’t seem quite as straightforward as Grunt. Grunt was easy to use. You just had to write (sometimes lengthy) configuration objects for the plugins you wanted to run and then fire off the tasks using the command window. Even someone like me could figure out how to add a source file and a destination location to a minification plugin and be reasonably sure I would get a minified file out of it.

It was also very easy to visualize what your Gruntfile was doing because every task plugin worked independently of the rest. You would configure ten different tasks and then register them all together in a row and expect them to run one after another until they all completed.

With Gulp, you don’t just configure plugins, you write JavaScript code to define your tasks and how you want them run. A Gulp task has you require the plugins you want to use (or write a custom task in plain old JavaScript), then call gulp.src to provide the source files for the task to run on. This opens a Node stream, which keeps your source files in memory. To run one of the plugins you required at the top of your script, you simply pass the in-memory stream to it with the .pipe() method, and you can keep piping from one plugin to the next until you’re finished. Finally, you call gulp.dest and provide a destination location.

var gulp = require('gulp');
var plumber = require('gulp-plumber');
var addsrc = require('gulp-add-src');
var less = require('gulp-less');
var cssnano = require('gulp-cssnano');
var concatCss = require('gulp-concat-css');
var rename = require('gulp-rename');
var concat = require('gulp-concat');
var uglify = require('gulp-uglify');

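// Compile style.less, minify it, append two prebuilt stylesheets and emit style.min.css.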
gulp.task('less', function(){
    return gulp.src('./source/style/style.less')
        .pipe(plumber())
        .pipe(less())
        .pipe(cssnano())
        .pipe(addsrc.append(['./source/style/anotherStyleSheet.min.css', './source/style/stillAnotherStyleSheet.min.css']))
        .pipe(concatCss('concat.css'))
        .pipe(rename("style.min.css"))
        .pipe(gulp.dest('./destination/style/'));
});

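// Uglify the app script (without mangling names), prepend prebuilt libraries and emit finalFile.min.js.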
gulp.task('js', function(){
    return gulp.src(['./source/scripts/javaScript.js'])
        .pipe(plumber())
        .pipe(uglify({
            mangle: false,
        }))
        .pipe(addsrc.prepend(['source/scripts/someJSLibrary.min.js', 
        'source/scripts/anotherJSFile.min.js','source/scripts/stillAnotherJSFile.min.js']))
        .pipe(concat("all.js"))
        .pipe(rename("finalFile.min.js"))
        .pipe(gulp.dest('./destination/scripts/'));
});

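// Default task: run both builds, then watch the source files and rebuild on change.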
gulp.task('default', ['less', 'js'], function() {
    gulp.watch('./source/style/style.less', ['less']);
    gulp.watch('./source/scripts/javaScript.js', ['js']);
});

The great thing about using Node streams is that you don’t have to keep opening and closing files for each task like in Grunt. This lack of I/O overhead makes running a series of tasks very fast. Even so, you really need to use the built-in watch task to take advantage of this speed. In my experience, running a default task with four or five tasks in it from the command line was almost as slow as in Grunt. With the watch task running, it only took milliseconds to rebuild what it needed to. But I’m new to Gulp, so what do I know?

You can see in the code above that I used several plugins to manipulate the input file as it is piped down the stream. There are two that I found particularly helpful. The first is gulp-plumber, which is basically a patch that keeps streams from being un-piped when an error is encountered. Supposedly, streams breaking on error will be fixed in Gulp 4.0.

The second helpful plugin here is gulp-add-src, which does exactly what the name says: it lets you add additional source files to your stream so you can do neat things like concatenation. With these and other plugins, I haven’t found anything in Gulp that would keep me from doing everything I could with Grunt.

The only thing I really don’t like about Gulp is the icon. It’s a cup with a straw in it and the word Gulp across its side. A cup by itself indicates an ability to gulp what is in it. But you don’t gulp through a straw, you sip or suck. Who wants their product to suck? And sip indicates a lack of passion. So what’s with the straw?

Gulp.js cup icon

Why Grunt is Gone From My Build Team Lineup

I have to admit, I don’t always research every option when I’m looking for a solution to a problem. I’ll usually start out with a broad web search to see what others are using and whether their solutions seem to fit my situation. Then I’ll take maybe the top two solutions and try to implement them. The first one that serves all of my requirements and is relatively easy to implement usually becomes my solution. This is exactly how I came to start using Grunt as the build tool for a large web mapping application I develop.

When I first started using Grunt, the JavaScript API development team at ESRI was using it for their projects. Lots of other developers were using it too and I didn’t know any better than to follow. A few people were talking about Gulp too as an alternative to Grunt. So I took a brief look at Gulp, didn’t immediately understand it, then started putting together my Grunt configuration file and collecting all the plugins I needed.

What can I say – it worked great and I was happy that I wasn’t still using code minifiers and copying files by hand to production folders. When I started using Adobe Brackets as my default code editor, I was pleased to find it had a great Grunt plugin to integrate task running directly.

Things were great for a while but I was always bothered by how long Grunt took to run through all my tasks and complete my build. It would take 10+ seconds to finish and I would have to sit there waiting to check my latest edits. It can be really hard to develop a piece of code when you are constantly halting your flow.

However, I was lazy and didn’t want to have to learn another tool. What I had in place worked, just not efficiently. But eventually I knew something had to change. Strangely, it wasn’t inefficiencies with Grunt that made me dump it, it was Brackets. My Brackets install was slowing down and freezing at inopportune times, like whenever I wanted to use it. I was also getting the Brackets “white screen of death” from time to time, which required the Task Manager just to shut the program down. So now I was waiting 30 seconds for my editor to unfreeze so I could wait 10 seconds for my task runner to finish.

The upshot is, I revisited Atom and am now using it as my default editor. Fortunately, I wasn’t happy with Atom’s Grunt integration. I figured it was a great time to jump ship and try again with the second biggest player in the JavaScript task running world: Gulp.

In my next couple of posts I’ll talk more about why Gulp is great and why I shouldn’t have nuked Atom when I was first choosing a new editor.

Building a Stubborn Driver: An Ubuntu Adventure

I was confused, frustrated and defeated. My back was on fire and I could barely feel my legs. If there had been a bed within range I would have crawled in, closed my eyes and tried to forget the five hours I had just spent trying to build Linux drivers for my son’s new USB wireless adapter. It didn’t work. Nothing was working!

Wireless Adapter

Now I’m no Linux expert, but I can usually figure out how to make the OS do what I need it to do. In this case it should have been simple – just make and install the source files and maybe change a setting in another file. But things went wrong from the beginning.

It was Saturday and I anticipated getting the project done fairly quickly. I had actually tried to get the WiFi working the day before, but I was trying to do it without a wired connection to help out. I had just installed Ubuntu 14.04.03 and I really didn’t think I would have any trouble.

The open source drivers that come with Ubuntu take care of most of the hardware I want to use. But this particular wireless adapter was a plug-in USB device with a proprietary driver that had to be compiled by hand. The adapter came with one of those mini-CDs containing three folders with drivers for Linux, Windows and Mac.

The Windows and Mac folders each had exactly one file that you click on to load the driver. I’ve used this adapter on a Windows box and it works really well: all you have to do is double-click the executable and away you go. On Linux you have to build the driver from source code. So we’ve gone from one .exe file to about 450 files that you have to figure out how to put together and get to work. OK, but this is Linux. That’s what you expect from an open source OS.

But even building drivers shouldn’t be that difficult if you know basic Linux commands: how to traverse directories, edit files and use make. Still, with such a large user and contributor base (for Ubuntu), you would think someone would have made the process for this driver a little clearer. Here are the build instructions that came with the adapter:

Build Instructions:  
====================

1> $tar -xvzf DPB_RT2870_Linux_STA_x.x.x.x.tgz
    go to "./DPB_RT2870_Linux_STA_x.x.x.x" directory.
    
2> In Makefile
	 set the "MODE = STA" in Makefile and chose the TARGET to Linux by set "TARGET = LINUX"
	 define the linux kernel source include file path LINUX_SRC
	 modify to meet your need.

3> In os/linux/config.mk 
	define the GCC and LD of the target machine
	define the compiler flags CFLAGS
	modify to meet your need.
	** Build for being controlled by NetworkManager or wpa_supplicant wext functions
	   Please set 'HAS_WPA_SUPPLICANT=y' and 'HAS_NATIVE_WPA_SUPPLICANT_SUPPORT=y'.
	   => #>cd wpa_supplicant-x.x
	   => #>./wpa_supplicant -Dwext -ira0 -c wpa_supplicant.conf -d
	** Build for being controlled by WpaSupplicant with Ralink Driver
	   Please set 'HAS_WPA_SUPPLICANT=y' and 'HAS_NATIVE_WPA_SUPPLICANT_SUPPORT=n'.
	   => #>cd wpa_supplicant-0.5.7
	   => #>./wpa_supplicant -Dralink -ira0 -c wpa_supplicant.conf -d

4> $make
	# compile driver source code
	# To fix "error: too few arguments to function ¡¥iwe_stream_add_event"
	  => $patch -i os/linux/sta_ioctl.c.patch os/linux/sta_ioctl.c

5> $cp RT2870STA.dat  /etc/Wireless/RT2870STA/RT2870STA.dat
    
6> load driver, go to "os/linux/" directory.
    #[kernel 2.4]
    #    $/sbin/insmod rt2870sta.o
    #    $/sbin/ifconfig ra0 inet YOUR_IP up
        
    #[kernel 2.6]
    #    $/sbin/insmod rt2870sta.ko
    #    $/sbin/ifconfig ra0 inet YOUR_IP up

7> unload driver    
    $/sbin/ifconfig ra0 down
	$/sbin/rmmod rt2870sta

I started by simply trying to make the driver according to the instructions above. But the process kept erroring out, acting like it couldn’t find certain files that should have either been included or created when the make command was run. So I went searching “How to compile RT2870STA driver”. There were a lot of sites giving basic instructions on building the driver, and it seemed like it should work fine. What I didn’t notice were the dates on most of those articles. They were pre-2010.

I finally came across a newer post explaining that this driver was written with the earlier 2.x Linux kernels in mind. In the 3.x kernels, some functions referenced by my driver source files were renamed! Then I finally did what I should have done from the very beginning: I got specific. I searched “How to compile RT2870STA on Linux 3.19 kernel”. This seemed like a good idea at the time. In fact, it yielded a great blog post that provided a patch that supposedly would fix the discrepancy in the driver files. But for the life of me, I couldn’t even get that to run.

It was at that point I became too frustrated and defeated to continue. My entire day had been wasted. My kids were complaining that they hadn’t seen me all day. My wife was giving me that concerned “Oh dear, he’s trying to do smart people things again” look. Even my dog seemed annoyed that I had spent more time typing “make install” than throwing her ball.

So I gave up, took a few days away from my little project and did some other tasks that were just slightly easier for me, like taking out the garbage. Then, Saturday morning, I thought: why not give it one more try? I searched “How to compile RT2870STA on Ubuntu 14.04.03”. It was like magic! The very first result was a post on ubuntuforums.org that explained everything in a few simple steps. It turned out two functions had been renamed in the newer kernel that ships with this Ubuntu release. I had to edit a file in one of the driver folders and change a couple of function names, then run make again, and voila, my adapter was up and running.

Looking back, I realize I’ve learned (and re-learned) a lot about working with Linux. I now have much better terminal skills. I understand the driver compilation process better, and how drivers interact with the kernel. I also reinforced my belief that a wired Ethernet connection is always superior to wireless, however inconvenient.