September 8, 2016 brianradio2016

Devops may be one of the haziest terms in software development, but most of us agree that five activities make devops what it is: continuous integration, continuous delivery, cloud infrastructure, test automation, and configuration management. If you do these five things, you do devops. Clearly, all five are important to get right, but all too easy to get wrong. In particular, continuous integration and continuous delivery (CICD) may be the most difficult devops moves to master.

Continuous integration (CI) is a process in which developers and testers collaboratively validate new code. Traditionally, developers wrote code and integrated it once a month for testing. That was inefficient — a mistake in code from four weeks ago could force the developers to revise code written one week ago. To overcome that problem, CI depends on automation to integrate and test code continuously. Scrum teams using CI commit code at least daily, and many commit code for every change introduced.
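
To make that concrete, here is a minimal sketch of the kind of automated check a CI server runs on every commit. It's written in Python for illustration; the repository layout and test command (pytest against a tests directory) are assumptions, and real teams usually express the same steps in their CI tool's pipeline configuration.

    # Minimal sketch of a per-commit CI check: integrate the latest code,
    # then run the automated test suite and fail loudly if anything breaks.
    # The test command and directory layout are illustrative assumptions.
    import subprocess
    import sys

    def run(cmd):
        """Run a command and abort the build if it returns a nonzero status."""
        print("$", " ".join(cmd))
        result = subprocess.run(cmd)
        if result.returncode != 0:
            sys.exit(result.returncode)

    if __name__ == "__main__":
        run(["git", "pull", "--ff-only"])         # pull in the latest integrated changes
        run(["python", "-m", "pytest", "tests"])  # run the automated test suite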

Continuous delivery (CD) is the process of continuously creating releasable artifacts. Some companies release to users once or even multiple times a day, while others release the software at a slower pace for market reasons. Either way, the ability to release is tested continuously. Continuous deployment is possible thanks to cloud environments. Servers are set up so that you can deploy to production without taking them down and updating them manually.

Thus, CICD is a process for continuous development, testing, and delivery of new code. Some companies like Facebook and Netflix use CICD to complete 10 or more releases per week. Other companies struggle to hit that pace because they succumb to one or more of five pitfalls I’ll discuss next.

September 1, 2016 brianradio2016

The Iron.io Platform is an enterprise job processing system for building powerful, job-based, asynchronous software. Simply put, developers write jobs in any language using familiar tools like Docker, then trigger the code to run using Iron.io’s REST API, webhooks, or the built-in scheduler. Whether the job runs once or millions of times per minute, the work is distributed across clusters of “workers” that can be easily deployed to any public or private cloud, with each worker deployed in a Docker container.
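
As a rough illustration of triggering a job over REST, the Python sketch below posts a task to a hypothetical job-queue endpoint. The base URL, route, authentication header, and payload shape are assumptions for illustration only; consult Iron.io’s API documentation for the real interface.

    # Hypothetical sketch of queuing a job via a REST call. The endpoint,
    # project ID, token, and payload shape are illustrative placeholders,
    # not Iron.io's actual API.
    import requests

    API_BASE = "https://worker-api.example.com/v2"   # placeholder base URL
    PROJECT_ID = "my-project-id"                     # placeholder project
    TOKEN = "my-api-token"                           # placeholder credential

    response = requests.post(
        f"{API_BASE}/projects/{PROJECT_ID}/tasks",
        headers={"Authorization": f"OAuth {TOKEN}"},
        json={"tasks": [{"code_name": "image-resizer",
                         "payload": '{"image_url": "https://example.com/photo.jpg"}'}]},
    )
    response.raise_for_status()
    print(response.json())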

At Iron.io we use Docker both to serve our internal infrastructure needs and to execute customers’ workloads on our platform. For example, our IronWorker product has more than 15 stacks of Docker images in block storage that provide language and library environments for running code. IronWorker customers draw on only the libraries they need to write their code, then upload that code to Iron.io’s S3 file storage. From there, our message queuing service merges the base Docker image with the user’s code in a new container, runs the process, and destroys the container.
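
The lifecycle described above (merge a base image with the user’s code, run it, throw the container away) can be approximated with ordinary Docker commands. The sketch below drives the Docker CLI from Python; the base image, paths, and entrypoint are illustrative assumptions, not IronWorker internals.

    # Sketch of the run-and-destroy pattern: start a container from a base
    # language image, mount the user's uploaded code into it read-only,
    # execute the job, and let --rm remove the container when it exits.
    import subprocess

    def run_job(code_dir, entrypoint):
        cmd = [
            "docker", "run",
            "--rm",                         # destroy the container after the job exits
            "-v", f"{code_dir}:/task:ro",   # mount the uploaded code read-only
            "python:3-slim",                # a base language/environment image
            "python", f"/task/{entrypoint}",
        ]
        return subprocess.run(cmd).returncode

    if __name__ == "__main__":
        code = run_job("/srv/uploads/job-1234", "worker.py")
        print("job finished with exit code", code)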

In short, we at Iron.io have launched several billion Docker containers to date, and we continue to run Docker containers by the thousands. It’s safe to say we have more than a little experience with Docker. Our experience launching billions of containers for our customers’ workloads has enabled us to discover (very quickly) both the excellent benefits and the frustrating aspects of Docker.

The good parts

We’ve been fortunate to interact regularly with new technologies, and although new technologies bring their own sets of problems, they have helped us achieve otherwise impossible goals. We needed to quickly execute customer code in a predictable and reproducible fashion. Docker was the answer: It gives us the ability to deploy user code and environments in a consistent and repeatable way, and it provides ease of use when running operational infrastructure.

August 25, 2016 brianradio2016

If IT today has a watchword, that word is “speed.” At the same time, IT is expected to protect the organization’s crown jewels (company finances and private customer records) and to keep services available and responsive. The message: Go faster, but don’t break anything important. Even Facebook has modified its motto on this front from “Move fast and break things” to “Move fast with stable infrastructure.”

In response to these pressures, enterprise IT teams are increasingly adopting the techniques in use at cutting-edge web-scale companies like Amazon, Google, Netflix, and Facebook, broadly characterized as microservices. Put simply, microservices take slow-changing, monolithic functionality and break it apart into many small pieces that can be changed independently of one another. The traditional change management role moves away from “command and control” toward a highly automated deployment model that often resembles barely controlled anarchy. While this approach can look chaotic, it is reinforced by strong rigor around safety of change and responsiveness to the user.

Taken at face value, the microservices approach seems like a logical method to evolve a traditional enterprise architecture to satisfy the market pressures of speed. Deconstructing monolithic architecture into smaller, independent pieces seems to many organizations like a natural evolution of SOA. In many regards, this is true, but it can lead to putting the focus on the wrong areas.

SOA lessons learned

One of the major challenges enterprises faced when attempting to gain value from the SOA approach was a tendency to overthink the design of services and to become paralyzed by debates about service granularity and adherence to pure architectural principles. The design aspect of SOA was important, but this overemphasis caused many organizations to fail to deliver on the promise of more efficient IT delivery: they lingered in the early design stages and rushed through the operational aspects. Today, as teams adopt microservices, operational considerations are once again overlooked.

August 23, 2016 brianradio2016

If you’ve been in the technology industry for more than a decade, you remember the Wintel world that was: PCs from Hewlett-Packard and Dell reigned, Windows was the only operating system that mattered, and the Wintel duopoly would live as long as Rome. In 2005, the still-struggling Apple dropped the PowerPC and embraced Intel chips; about the only sign of trouble was IBM getting out of the PC business, selling it to China’s Lenovo — but that was framed as the fall of an American icon that was stretched in too many directions and the concurrent rise of China, not related to the PC itself.

Then it all fell apart.

Today, the companies that matter are not the old Wintel hardware powerhouses, but Apple, Google, and Samsung — and Microsoft, thanks to a turnaround piloted by its current CEO, Satya Nadella. IBM’s pivot away from the PC helped it focus enough to get a new wind as a systems integrator and back-end provider — it essentially decided to play a different game.

Of the three Wintel hardware giants, HP and Dell seem destined for the dustbin of history, though they can persist as is for years through a slow decline. Intel may — or may not — turn around, Apple-like, by conquering new markets. It’s flailed for years but still keeps trying, with the advantage of having retained an innovative engineering culture that Dell never had and HP long ago jettisoned.

August 18, 2016 brianradio2016

Hyperscale public clouds are well established as the new platform for systems of record. Providers of ERP, supply chain, marketing, and sales applications are today predominantly or exclusively based in hyperscale public clouds. Oracle alone has thousands of customers for its front-office and back-office SaaS. And the list of customers is growing at a rate far exceeding that of traditional front-office and back-office applications.

Hyperscale public clouds are also, of course, a proper place to run new cloud-native applications that enhance or extend those system-of-record applications. These new applications are architected differently. While systems of record are typically large, monolithic applications running in virtual machines in the cloud, cloud-native applications are usually written as microservices, packaged in containers, and orchestrated to deliver a complete application to users. Among the benefits of this approach:

  • Faster innovation
  • The ability to provide specific customization for each application use
  • Improved reuse of code
  • Cost savings versus conventional virtualization due to the greater deployment density of containers and more efficient consumption of resources

All of this is common knowledge, endlessly touted, no longer debated.

Less discussed, however, is the galaxy of applications that aren’t necessarily suitable for centralized hyperscale cloud deployment. Instead, these applications thrive in distributed computing environments, potentially based on cloud services, at or close to the edge of the network. These applications are systems of engagement and systems of control.

August 16, 2016 brianradio2016

Hackers of the world, unite! You have nothing to lose but the lousy stock firmware your routers shipped with.

Apart from smartphones, routers and wireless base stations are undoubtedly the most widely hacked and user-modded consumer devices. In many cases the benefits are major and concrete: a broader palette of features, better routing functions, tighter security, and the ability to configure details not normally allowed by the stock firmware (such as antenna output power).

The hard part is figuring out where to start. If you want to buy a router specifically to be modded, you might be best served by working backward: look at the available offerings, pick one based on its feature set, and then select a suitable device from that offering’s hardware compatibility list.

In this piece we’ve rounded up six of the most common varieties of third-party network operating systems, with the emphasis on what they give you and who they’re best suited for. Some are designed only for embedded hardware or specific router models, some are more hardware-agnostic, and some serve as the backbone for x86-based appliances. To that end, we’ve presented the more embedded-oriented solutions first and the more PC-oriented solutions last.

August 16, 2016 brianradio2016

Chances are very high that you have one at your desk: a phone connected to a landline or a PBX. It may be a basic model, or it may be fancy, with a display screen, employee directory, and configurable voicemail greetings. It may be analog or digital. Whatever it is, it sits on your desk, so it has little utility when you’re elsewhere.

And when you’re away, you probably use a personal cellphone or (less commonly) one provided by your employer — to check your desk phone’s voicemail, if nothing else. How 1980s!

But most of us still need a desk phone, for many reasons:

  • Cell service is variable, and it’s often unreliable within buildings. Plus, using cellphones adds cost that many employers don’t want, and BYOD approaches mean business communications go to personal phones, which in fields like sales is dangerous for the company.
  • Wi-Fi calling on smartphones is in its early days, and cordless Wi-Fi phones are not widely deployed. Broad deployments would also mean significant upgrades to corporate networks — and those phones don’t work outside the office.
  • Plus, we don’t have an infrastructure at all our desks to keep those smartphones operational throughout the day. Charging stations and headsets at least need to be ubiquitous if we’re to rely on smartphones all day for voice; their batteries can’t take that usage, and they’re ergonomically awkward for extended use.
  • Desktop phone apps and conference services tether you to your computer via a USB cable, and routing the audio properly is often confusing on both Windows PCs and Macs. Logging into such services across multiple devices gets into geek territory, leaving most employees out in the cold. And such phone and conference apps often work poorly on mobile devices, despite improvements in recent years. Oh, and those that send you text transcripts or audio attachments of your messages litter your email inbox with usually irrelevant calls from scammers and salespeople.

Basically, the new digital technologies are inadequate, so the trusty old phone survives. It even has an advantage: If you’re an exec constantly call-spammed by salespeople, you can turn off the ringer and send all calls to voicemail, using email, a second private line, or a personal cellphone for taking legitimate calls. The desk phone becomes the phone you dial out on or use for conference calls.

August 11, 2016 brianradio2016

The first generation of API management tools has been focused on helping developers publish APIs for other developers. Developers are highly successful at publishing APIs, so much so that there are now tens of thousands of public APIs available. However, we’ve been less successful at getting downstream developers to actually use our APIs. External adoption of most APIs has lagged greatly behind expectations.

API publishers have learned that there are many facets to driving adoption and usage of their APIs. The next great challenge in API management revolves around one question: How can we make our APIs more compelling to other developers?

Integration is the leading use case for most APIs, and some of the best APIs have common traits that support integration use cases. Four that are proven to boost your API’s adoption are accommodating custom data, providing machine-readable docs, applying rich metadata, and exposing webhooks. These features will become increasingly critical in driving adoption of your APIs as developers select not only the easiest but also the most functional APIs to accomplish integration tasks. Ultimately, these are the traits that increase your API’s rate of adoption because they make life easier for your customers.

1. Accommodate custom data 

SaaS apps make extensive use of custom data objects and custom data fields. Many SaaS apps provide powerful capabilities for users to create, update, and delete custom data, but most of these apps do not provide similar custom data management facilities through their APIs. There is a series of steps you can take to better accommodate custom data through your APIs, from discovery to field-level CRUD to full CRUD operations for custom objects.
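
As a rough sketch of the discovery and field-level steps, the Python example below calls a hypothetical REST API: it first asks which custom fields a tenant has defined on an object, then updates one field on a single record. The base URL, routes, and payload shapes are assumptions for illustration, not any particular vendor’s API.

    # Hypothetical sketch of custom-data support in a REST API: discover the
    # custom fields defined on the "contact" object, then update one field
    # value on a record. All URLs and payload shapes are illustrative.
    import requests

    BASE = "https://api.example.com/v1"
    HEADERS = {"Authorization": "Bearer my-api-token"}

    # 1. Discovery: which custom fields has this tenant defined on contacts?
    fields = requests.get(f"{BASE}/objects/contact/custom_fields", headers=HEADERS)
    fields.raise_for_status()
    for field in fields.json():
        print(field["name"], field["type"])

    # 2. Field-level update: change a single custom field on one record.
    update = requests.patch(
        f"{BASE}/contacts/42",
        headers=HEADERS,
        json={"custom_fields": {"renewal_date": "2016-12-01"}},
    )
    update.raise_for_status()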

August 4, 2016 brianradio2016

When it comes to enterprise application development, security is still an afterthought, coming in right before a release is deployed. The rapid adoption of software containers presents a rare opportunity for security to move upstream (or in devops-speak, to facilitate its “shift left”) and become integrated early on and throughout the software delivery pipeline. However, most security teams don’t know what containers are, let alone what their unique security challenges might be.

Software containers can be thought of as lightweight virtual machines with much leaner system requirements. Containers share the host OS kernel during runtime, making them exceptionally light (only megabytes in size) — and fast. Containers take mere seconds to start, as opposed to a few minutes for spinning up a VM.  
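
If you have Docker installed, the startup-speed claim is easy to check for yourself. The rough Python sketch below times how long a throwaway container takes to start, run a trivial command, and exit; it assumes a local Docker daemon and that the small alpine image has already been pulled.

    # Rough timing sketch: start a throwaway container, run a no-op, and
    # measure wall-clock time. Assumes Docker is running locally and the
    # alpine image is already pulled (a first pull would dominate the time).
    import subprocess
    import time

    start = time.monotonic()
    subprocess.run(["docker", "run", "--rm", "alpine", "true"], check=True)
    elapsed = time.monotonic() - start
    print(f"Container started, ran, and exited in {elapsed:.2f} seconds")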

Containers have been around since the early 2000s and were architected into Linux in 2007. Because of the small footprint and portability of containers, the same hardware can support far more containers than VMs, dramatically reducing infrastructure costs and enabling more apps to deploy faster.

However, due to usability issues, containers did not catch on until Docker came along and made them more accessible and enterprise-ready. Now containers — and Docker — are red hot. Earlier this year, JPMorgan Chase and BNY Mellon publicly stated that they are pursuing a container-based development strategy, proof that containers have as much to offer traditional enterprises as cloud juggernauts like Google, Uber, and Yelp.

August 2, 2016 brianradio2016

There’s been a lot of hand-wringing in Silicon Valley about its diversity problem: the tiny percentages of blacks, Hispanics, and women hired in the heart of the tech industry. Companies like Apple, Google, and Yahoo now publish self-shaming diversity reports that show they’re anything but. (Silicon Valley is also very ageist, but the diversity reports haven’t gone there yet.)

The implication is that Silicon Valley is racist and misogynistic. That’s not exactly right — it readily accepts Asians, both native and immigrants. And it’s long been welcoming to gays.

Why poor diversity matters to Silicon Valley’s bottom line

Despite its purported desire and hand-wringing over diversity, Silicon Valley isn’t a full part of the real world its technologies are supposed to serve. Silicon Valley’s particular mix of favored people biases it toward rich people’s convenience products and away from the needs of most of the world.

I believe that one reason nearly every startup seems to be working on mommy-replacement technologies — from Uber to TaskRabbit, from Soylent to Mark Zuckerberg’s planned AI household servant — is Silicon Valley’s bias toward professional-class, coddled white and Asian young men. Or they’re working on messaging apps and chat bots, so they can keep hanging out with their friends. Their worldview permeates the products they create.