January 18, 2016

I’ve been spending quite a bit of time on embedded devices recently, as both a user and a designer. It’s not my first rodeo — I’ve designed and built a variety of embedded systems over the years — and the experience offers a stark contrast with standard computing practices and design.

Just as other computing technology has evolved greatly in the past few decades, embedded systems have grown as well, but their trajectories are quite different.

The term “embedded device” itself covers a lot of ground these days. A touchscreen remote could be considered an embedded device, as could any number of IoT widgets, all the way up to smartphones and a quad-core ARM box running Linux or Android that can display 4K video. The truth is that some embedded systems today are comparable in horsepower to desktop systems from not that long ago.

One thing that these devices share is their fragility. If an embedded device loses power during a firmware upgrade, it’s probably bricked. Similarly, a sufficiently severe firmware bug can render the device permanently useless. The flash storage in these devices is generally soldered in and not replaceable, and if that flash develops problems, the device is toast. Further, the nature of embedded designs is such that updates must occur remotely, either through a phone-home process or through a manual trigger, which leaves little to no room for error anywhere. Embedded systems are a finicky business.

January 11, 2016

As we continue the march of Webifying everything, with BYOD becoming a pervasive part of professional life and websites tracking users at every level, we’re ill-equipped to manage professional and personal Internet use. Most people don’t know the difference and blithely combine the two. This is how so many professional email addresses were found in the Ashley Madison hack, and how so many people run afoul of corporate security measures when they access certain NSFW sites — even when the access is from their personal laptop.

I run three or four different Web browsers simultaneously at any given time. Usually it’s Safari, Chrome, and Firefox. I tolerate the resource drain caused by this method for a few reasons, but the primary reason is task and usage separation.

I reserve one browser for personal use. This is where I’ll do my casual browsing, news, aggregators, shopping, and whatnot. Another browser is earmarked for work. This is used for all the Web apps I use on a daily basis, everything from GitHub to various hosted service control panels, WebEx, custom apps, and so on. The work browser generally cannot leverage privacy tools such as Ghostery and NoScript because they interfere with normal app functions. Finally, I run a dedicated browser exclusively for social media. I firewall trackers as much as possible, and many social sites are deliberately nonfunctional if you’re using blockers and privacy tools, so those sites get their own sandbox: They get to watch me use their services, but they can track me nowhere else.

While mine is not perhaps a normal use case, it’s not terribly far-fetched either. I’m sure there are many people who at least separate work and personal Web use by using Chrome for one and Firefox for another. This is a resource drain, and not altogether necessary. Some folks try to use Chrome’s People feature, which is somewhat close in concept but functionally horrible for this purpose. Firefox offers profiles too, and you can create custom launchers to invoke multiple separate instances of Firefox with different profiles, but that’s far less than ideal.
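
If you do go the Firefox route, the launchers look roughly like this (a sketch, with profile names of my own choosing):

    # Create dedicated profiles once; the names are arbitrary
    firefox -CreateProfile Work
    firefox -CreateProfile Social

    # Launch each profile as its own isolated instance; -no-remote keeps a
    # new instance from attaching to one that's already running
    firefox -no-remote -P Work &
    firefox -no-remote -P Social &

It works, but this kind of separation should be a first-class browser feature, not a command-line trick.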

January 4, 2016

After a tumultuous 2015 in tech, 2016 promises to be much the same. The past year opened with wrangling over Net neutrality, and it closed with wrangling over encryption backdoors. There were some highs: The FCC voted to protect the Internet from the big ISPs, and the would-be disaster of a Comcast-Time Warner Cable merger was averted.

And there were many lows: the massive Office of Personnel Management data breach, the Ashley Madison hack, Lenovo and Dell putting users at risk, the slippery return of CISA and unfettered government surveillance….

It was a year of triumphs and tumbles. Don’t expect 2016 to be any less contentious and eventful. Here are five things we might expect to see in the next 12 months.

Tech tremor No. 1: As big ISPs ramp up their bullying, more pockets of responsible Internet service will appear

In the wake of the FCC’s strict Net neutrality regulations in 2015, the big ISPs seem hell-bent on proving to the world that they are irresponsible monopolies. They will double down on the shenanigans in 2016, possibly in an attempt to squeeze every last cent out of their captive customer base before the inevitable happens and they are either forced to compete for customers or regulated as utilities. Expect more news about data caps, poor customer service, predatory rate changes, mythical service offerings, and the rest of the panoply of borderline-legal games they have become synonymous with.

December 21, 2015

This past year in technology has been quite the roller coaster. Of course, you could say the same for most years in the past few decades, but this year went above and beyond in many ways, most of them having little to do with technology itself, and more with its impact on modern civilization and politics. A number of watershed events this year will ripple far into the future.

It’s a challenge to pinpoint the most significant technology-related event of 2015, but in the United States, it may be the FCC vote upholding strict Net neutrality regulations. The actions and words of FCC chairman Tom Wheeler came as a surprise to many who had written him off as a cable industry puppet, but in reality, he was up to the challenge. I can’t say that my faith in him was absolute, but I did call it a year before.

It’s not possible to overstate the importance of this decision to the future of the Internet and the future of the U.S. and world economies. Preventing corporations from controlling and constraining access to Internet resources based on business deals and incentives means that the Internet can continue to grow and provide the mechanism by which myriad new resources and technologies will be developed and delivered.

Historians will look at this event in 2015 as a monumental step forward. We can only hope that we can keep to that path and complete the process of divorcing content providers from Internet service providers, and drag our Internet infrastructure into this century through competition if possible, and regulation if not. There’s no alternative.

December 14, 2015

Last week, Let’s Encrypt entered public beta. Let’s Encrypt is a collaborative effort that provides free SSL/TLS certificates for use by anyone with a valid Internet domain. It’s also a trusted certificate authority, and it’s currently issuing 90-day certificates free of charge. The upside is free SSL/TLS certificates. The downside is that 90-day expiration, though there are methods to renew the certificates automatically as the expiration approaches.

Further, the tools provided by Let’s Encrypt make it pretty much effortless to implement. The Let’s Encrypt Python tool available at GitHub runs on a Web server, requests a valid certificate, and even does the Apache configuration for you, all with a pretty ncurses UI. Basically, you run this on a host with a bunch of non-SSL domains, and when it’s done, they’re all secured with free valid certificates.
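
For the simple Apache case, the dance looks something like this (a rough sketch; the domains and paths are placeholders, and the client’s options have been changing quickly):

    # Grab the client and let it obtain and install a certificate for Apache
    git clone https://github.com/letsencrypt/letsencrypt
    cd letsencrypt
    ./letsencrypt-auto --apache -d example.com -d www.example.com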

Automated support for other Web servers such as Nginx is in the works, but the tools also function as a CLI, meaning you can easily integrate this into any Web service manually, and run those commands on a routine basis via anacron to ensure that you get a new certificate before the existing certificate expires.
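
A renewal job along those lines might look like the following sketch, assuming the webroot plugin and a script dropped into cron.monthly; the paths and domains are mine, and the exact flags vary by client version:

    #!/bin/sh
    # /etc/cron.monthly/renew-certs -- illustrative renewal job
    # Re-request the certificate well before the 90 days are up, then reload Apache.
    /opt/letsencrypt/letsencrypt-auto certonly --webroot -w /var/www/example \
        -d example.com -d www.example.com --renew-by-default \
        && service apache2 reload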

This 90-day expiration is quite short and frankly a bit of a pain, but for a first launch it’s a reasonable balance. If some of these certs are put to nefarious use, the damage will be limited to a few months, and the certificates simply won’t be renewed. Perhaps as time wears on, trusted clients will be granted longer expiration periods. This is a bit of an experiment, after all — and yet Let’s Encrypt has already distributed more than 100,000 certificates.

December 7, 2015

One thing you can say for traditional broadcast media: They scale really well. If you put an analog signal on the air or on a wire with enough repeaters and amplifiers, it will serve every client that connects. That’s not the case with most of the network world, unfortunately. Sure we have multicast, but that’s not on an Internet scale — and the Internet is where the problems lie.

First, let’s define multicast as used in IP networks. This is a method by which a single source stream can be accessed by multiple clients simultaneously, without increasing the load on the source itself. Thus, it functions much like an analog broadcast: You have a single source that a client can connect to at any time. The downside is that the client is a silent subscriber of the content and cannot control the stream; there’s no rewinding or restarting on a per-client basis. This is content broadcast over IP, and it’s what television networks use to distribute video streams through their networks, what financial institutions use to receive stock quotes, and so forth.
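
You can see the one-to-many behavior on a LAN with nothing more than iperf, assuming the switches pass multicast: any number of receivers join the same group address, and a single sender feeds them all without ever knowing how many are listening. The group address and numbers here are illustrative:

    # On each receiver: join the multicast group and listen for the stream
    iperf -s -u -B 239.255.12.34 -i 1

    # On the sender: push one UDP stream to the group address; every
    # subscribed receiver gets the same packets at no extra cost to the sender
    iperf -c 239.255.12.34 -u -T 4 -t 30 -b 2M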

On the other hand, the world is rapidly moving to a demand model — the younger generations are already there — where streaming content is controlled by the client, and the client forms a one-to-one connection to the content source, versus multicast’s one-to-many approach. If I’m streaming video from a news site, that content is sent to me and only me as it streams. It may be cached somewhere along the way, but ultimately, that stream is unicast and not shared. Also, the content provider must accommodate the bandwidth required for that stream, as well as the resources necessary to deliver it.

Clearly, this isn’t usually a problem. With a suitable broadband connection on a normal day, accessing content around the Internet is a relatively stable and consistent experience, depending on how adept the provider of that content may be in actually delivering the content.

November 30, 2015

A little more than a year ago, I urged manufacturing companies testing the IoT waters to leave the work of bringing Internet connectivity to their traditionally unconnected products to those who understand what’s at stake. I’m not alone in my concerns that the IoT brigade will bring with it an avalanche of staggeringly insecure products that will find their way into our daily lives.

What we’re seeing right now is a hopefully imperfect storm of security challenges that, with any luck, will not result in global security and privacy breaches. In one corner, we have companies like Dell and Lenovo distributing computers with wide-open root CAs, allowing anyone with a small amount of skill to crib a certificate and spoof SSL websites, run man-in-the-middle attacks, and install malicious software on those Windows systems with nary a whimper from the “protections” in place to prevent such issues.

Dell, in fact, has done this twice over with some of its new laptops, the second instance coming to light right before Thanksgiving. The upshot is that the company added a self-signed root certificate to the Trusted Root Store on new laptops and included the private key along with it. Thus, anyone can create certs that will be accepted as legitimate on those laptops. Good show!

In the other corner, we have several government agencies from around the globe arguing that they should have some kind of magical access key to all forms of encryption that will let them decrypt the data, while somehow keeping the “bad guys” out. Hanlon’s Razor states, “Never attribute to malice that which is adequately explained by stupidity.” But here we have broad evidence of both.

November 23, 2015

I paused a TV show last week as one of those lower-third ads promoting the local newscast was displayed. It screamed, “Encryption preventing police from catching criminals, more at 11.” There’s nothing subtle about that, I pointed out to my wife, nothing at all. Clearly, this “encryption” stuff is very dangerous and should be made illegal, right?

Then the world was scarred by the attacks in Paris a few days later. Before any real news about the attacks made it to the mainstream media, we were already hearing how encryption was the reason these attacks succeeded. The New York Times posted a story to that effect, then pulled it and redirected the link to a completely different article about France’s retaliation. The Wayback Machine still has the original, which states, “The attackers are believed to have communicated using encryption technology.” This is the functional equivalent of stating, “The attackers are believed to have communicated using words or sounds.”

As it happens, we’ve since found out that the attackers communicated through normal, plaintext communications channels, a point Bruce Schneier underscored with a tongue-in-cheek blog post about the terrorists’ use of “double ROT-13 encryption.” (The title is a joke: Applying ROT-13 twice gives you back the original text, so double ROT-13 encryption is no encryption at all.)

Yet we continue to hear from politicians and the mainstream media about how we need to add backdoors to encryption protocols, or do away with encryption altogether. According to Wired, the use of encryption will be a key issue in the 2016 U.S. presidential race. Given the general buffoonery that already surrounds the contest, I suppose that adding one more completely irrelevant and nonsensical talking point shouldn’t be surprising.

November 16, 2015

If you have access to a system and network infrastructure testing lab, fully stocked with server and network hardware, consider yourself incredibly fortunate. The predominant corporate attitude has always been that infrastructure testing isn’t nearly as important as development testing, and since it tends to be expensive, it’s not going to happen. This leaves network and systems folks with much more seat-of-the-pants work than anyone else in IT or development — that’s just the way it is.

Most of us acknowledge we’re unlikely to put together a full lab with enterprise-grade hardware and software that mimics production to test rollouts and infrastructure changes, but we also know we can’t simply wing these operations (most of the time). We come up with ways to test things that aren’t as close to production as we might like, but that go a long way toward ferreting out issues with a planned design or deployment.

For instance, if the goal is to configure and test edge security devices for a rollout to a pile of remote sites and you’re a network guy, you should know enough about Unix to build a VM or laptop into an Internet mimic and stage all the hardware in one place until everything is tweaked and tested. If you don’t have the budget for testing gear from a vendor such as Ixia, you should be able to put together a mockup to at least saturate network links and test interconnections, fail-over, and so forth. Open source tools such as packETH and iPerf, or even good old Netcat, can take you pretty far in this type of testing.
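
As a taste of what that looks like in practice, here’s a minimal link-stress sketch with iperf and Netcat; the addresses, ports, and durations are placeholders:

    # On the receiving end of the link under test
    iperf -s

    # On the sending end: eight parallel TCP streams for 60 seconds
    iperf -c 192.0.2.10 -P 8 -t 60

    # Or with plain Netcat: listen on one side, blast zeros from the other
    # (some netcat variants want -l -p 5001 instead of -l 5001)
    nc -l 5001 > /dev/null
    dd if=/dev/zero bs=1M count=1024 | nc 192.0.2.10 5001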

As an example of a simple testing method, if we needed to stage and test a half-dozen remote site firewalls prior to deployment, we could toss up a Linux VM (or a laptop) connected to a test network on one interface and to a routable network with Internet access on another. A switch connected to the test network would be populated with the WAN links from each of the firewalls. The test VLAN interface on the Linux VM would then carry the default gateway addresses each staged firewall points to, with IP forwarding enabled and an iptables NAT configuration on the routed interface.
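
On the Linux VM, that boils down to a few commands; the interface names and addresses below are illustrative, not a recipe:

    # eth0: routable network with real Internet access
    # eth1: test network carrying the staged firewalls' WAN links

    # Add the gateway addresses each staged firewall expects as its default route
    ip addr add 192.0.2.1/24 dev eth1
    ip addr add 198.51.100.1/24 dev eth1

    # Turn the VM into a router
    sysctl -w net.ipv4.ip_forward=1

    # NAT traffic leaving the routed interface so the firewalls can reach the Internet
    iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

From there, each firewall can be configured, updated, and tested against a believable “Internet” without ever leaving the bench.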