September 26, 2016 brianradio2016

As we’ve come to expect from new versions of Windows Server, Windows Server 2016 arrives packed with a huge array of new features. Many of the new capabilities, such as containers and Nano Server, stem from Microsoft’s focus on the cloud. Others, such as Shielded VMs, illustrate a strong emphasis on security. Still others, like the many added networking and storage capabilities, continue an emphasis on software-defined infrastructure begun in Windows Server 2012.

The GA release of Windows Server 2016 rolls up all of the features introduced in the five Technical Previews we’ve seen along the way, plus a few surprises. Now that Windows Server 2016 is fully baked, we’ll treat you to the new features we like the most.

September 22, 2016 brianradio2016

Don’t be intimidated by webhooks. They are essentially the equivalent of “don’t call us, we’ll call you,” for the automated web. In the old days, if you wanted to act on changes to databases, websites, APIs, or online accounts, you would need to write a polling routine. Now, developers have a smarter option.

Why use webhooks versus polling? Polling involves writing an algorithm that checks the status of an endpoint for changes. Polling used to be the primary method for change notification at higher levels, but it’s not very efficient. Still, for low-level devices like printers, polling is your only option.
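A polling routine of the old-school sort can be sketched in a few lines. This is a generic illustration, not any particular product's code; `fetch_state`, `on_change`, and `max_polls` are hypothetical names introduced here for the example.

```python
import time

def poll(fetch_state, on_change, interval=5.0, max_polls=None):
    """Repeatedly fetch an endpoint's state and fire a callback on change.

    fetch_state: callable returning the endpoint's current state
                 (e.g. the body of an HTTP GET).
    on_change:   callable invoked with the new state when it differs.
    """
    last = None
    polls = 0
    while max_polls is None or polls < max_polls:
        state = fetch_state()      # one poll, whether anything changed or not
        if state != last:
            on_change(state)
            last = state
        polls += 1
        time.sleep(interval)
```

Note the inefficiency the article describes: the loop pays for a fetch on every iteration even when nothing changed, and a short `interval` multiplies that waste.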

A study by Zapier found that 98.5 percent of polls were wasted: they came back with no new event data. Meanwhile, they soaked up system resources and tended to swamp the I/O of the endpoint when polling intervals were set too close together.

Webhooks were introduced nearly a decade ago as a way to free up those resources. Webhooks send back notifications only when there are changes at the endpoint. Around 80 percent of developers surveyed preferred using webhooks over polling, but websites or APIs don’t support webhooks automatically.

September 21, 2016 brianradio2016

Does anyone even try to sell closed-source software anymore? It must be hard, when so many of the tools used to power the world’s largest datacenters and build the likes of Google, Facebook, and LinkedIn have been planted on GitHub for everyone to use. Even Google’s magic sauce, the software that knows what you will read or buy before you read or buy it, is now freely available to any ambitious developer with dreams of a smarter application.

Google didn’t always share its source code with the rest of us. It used to share research papers, then leave it to others to come up with the code. Perhaps Google regrets letting Yahoo steal its thunder with Hadoop. Whatever the reason, Google is clearly in the thick of open source now, having launched its own projects — TensorFlow and Kubernetes — that are taking the world by storm.

Of course, TensorFlow is the machine learning magic sauce noted above, and Kubernetes the orchestration tool that is fast becoming the leading choice for managing containerized applications. You can read all about TensorFlow and Kubernetes, along with dozens of other excellent open source projects, in this year’s Best of Open Source Awards, aka the Bossies. In all, our 2016 Bossies cover 72 winners in five categories:

The software tumbling out of Google and other cloudy skies marks a huge shift in the open source landscape and an even bigger shift in the nature of the tools that businesses use to build and run their applications. Just as Hadoop reinvented data analytics by distributing the work across a cluster of machines, projects such as Docker and Kubernetes (and Mesos and Consul and Habitat and CoreOS) are reinventing the application “stack” and bringing the power and efficiencies of distributed computing to the rest of the datacenter.

September 21, 2016 brianradio2016

The Sleuth Kit (TSK) is a fairly comprehensive collection of tools for analyzing and recovering files from disk images, useful for postmortem computer forensics in a corporate investigation of unauthorized use, an issue of workplace harassment, or a criminal investigation by law enforcement. TSK is the tool to use to dig deep into the disk.

When it comes to forensics at the file system level, TSK combines a number of command-line utilities (including fls to display file names within a file system, fsstat to show file system statistical data, and ils to list metadata entries, among others) with support for common file systems (including NTFS, FAT, ExFAT, UFS, EXT, and HFS), allowing you to examine Windows, many Linux, and most Mac OS X systems. Need to go deeper? TSK also allows you to drill down to the bits of a hard disk image to see what may be hidden within.

Working hand in glove with TSK is Autopsy, a GUI-based tool for searching disk images. Autopsy, by default, will search for recent user activity, email, pictures, IP addresses, phone numbers, URLs, and other interesting file types and tidbits. You can have Autopsy search for specific keywords and regex strings, or use it to dredge up files that contain audio or video, a plethora of document types, or any number of executable file types.

Between TSK and Autopsy, you can be sure that any disk you examine will reveal its secrets.

— Victor R. Garza

September 20, 2016 brianradio2016

Microsoft has given up on the PC; mobile devices have won the war after not even a decade.

OK, that’s a bit extreme. But the PC is becoming simply another device, reinvented to work like a smartphone or tablet when it comes to application development, application distribution, device management, and security.

The two major examples of this shift to what’s called an omnidevice strategy are Microsoft’s Surface Pro tablet and the adoption of mobile management and security standards in Windows 10.

Is a Surface Pro a laptop or a tablet? Yes. That’s the point: The difference is situational, as the device can be used in either mode, yet it is the same “tabtop” device. Likewise, Apple’s iPad Pro and Google’s Pixel C tablets, which no Microsoft IT shop would ever have considered to be laptops, can be used as laptops — but they don’t run Windows.

September 15, 2016 brianradio2016

Every company wants to guarantee uptime and positive experiences for its customers. Behind the scenes, in increasingly complex IT environments, this means giving operations teams greater visibility into their systems — stretching the window of insight from hours or days to months and even multiple years. After all, how can IT leaders drive effective operations today if they don’t have the full-scale visibility needed to align IT metrics with business results?

Expanding the window of visibility has clear benefits in terms of identifying emerging problems anywhere in the environment, minimizing security risks, and surfacing opportunities for innovation. Yet it also has costs. From an IT operations standpoint, time is data: The further you want to see, the more data you have to collect and analyze. It is an enormous challenge to build a system that can ingest many terabytes of event data per day while maintaining years of data, all indexed and ready for search and analysis.

These extreme scale requirements, combined with the time-oriented nature of event data, led us at Rocana to build an indexing and search system that supports ever-growing mountains of operations data — for which general-purpose search engines are ill-suited. As a result, Rocana Search has proven to significantly outperform solutions such as Apache Solr in data ingestion. We achieved this without restricting the volume of online and searchable data, with a solution that remains responsive and scales horizontally via dynamic partitioning.
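Rocana Search's internals aren't shown here, but the idea behind time-based dynamic partitioning can be sketched in a few lines. In this toy version (all names are hypothetical), each event lands in a fixed-width time bucket, new partitions appear on demand as time advances, and a time-range query touches only the partitions it overlaps instead of one giant index.

```python
from collections import defaultdict

def partition_key(ts_epoch, width_secs=3600):
    """Bucket an epoch timestamp into a fixed-width time partition."""
    return ts_epoch - (ts_epoch % width_secs)

class TimePartitionedIndex:
    """Toy index: partitions are created dynamically as events arrive,
    and time-range queries skip partitions outside the window."""

    def __init__(self, width_secs=3600):
        self.width = width_secs
        self.partitions = defaultdict(list)   # key -> [(ts, event), ...]

    def ingest(self, ts_epoch, event):
        self.partitions[partition_key(ts_epoch, self.width)].append(
            (ts_epoch, event))

    def query(self, start, end):
        first = partition_key(start, self.width)
        hits = []
        for key in sorted(self.partitions):
            if first <= key <= end:           # prune non-overlapping partitions
                hits.extend(e for ts, e in self.partitions[key]
                            if start <= ts <= end)
        return hits
```

Because partitions are bounded in time, old ones can be aged out or moved to cheaper storage without rewriting the rest of the index, which is what makes "years of searchable data" tractable.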

The need for a new approach

When your mission is to enable petabyte-level visibility across years of operational data, you face three primary scalability challenges:

September 14, 2016 brianradio2016

Microsoft Edge has bookmarks, of course, but they’re called Favorites. I’ll show you how they work and how to move over your bookmarks from another browser.

You get to your Favorites in Edge by clicking the Hub icon, which looks like three horizontal lines of different lengths. This menu slides out from the side, and you hit the star icon to get to your Favorites list.

You’ll probably want to turn on the Favorites bar, too. In the Favorites menu, hit the Settings link on the right. There you’ll see an option for “Show the favorites bar,” which, as you can see, is Off by default. Just slide it to On, and you’ll see the bar appear in the browser window.

You can build Favorites from scratch by going to any URL and hitting the star icon in the address bar. You can edit the name and also choose whether it’ll show up in the Favorites bar or in a Favorites folder. I’ll put this one on the bar so you can see what that looks like.

To import bookmarks from another browser, click the button in the Favorites settings that says “Import your favorites.” This works for Internet Explorer, Chrome, and Firefox, and it just drops those bookmarks into a folder in your Favorites list.

One of those folders will be called “Bookmarks bar.” That’s where all the websites you saved in your old browser’s bookmarks bar will be. To put them onto Edge’s Favorites bar, click and drag them to Edge’s “Favorites Bar” folder at the top of this window. You’ll also see them show up automatically in the Favorites bar.

Now that your Favorites are set up, you can learn more about Edge from our videos and articles at, or write to us at

September 12, 2016 brianradio2016

Open source software offers an economical and flexible option for deploying basic home, SMB, or even enterprise networking. These open source products deliver simple routing and networking features, like DHCP and DNS, combined with security functionality: a basic firewall at minimum, and possibly antivirus, antispam, and Web filtering.

These products can be downloaded and deployed on your own hardware, on a virtual platform, or in the cloud. Many of the projects behind them also sell preconfigured appliances, if you like their feature set or support but don’t want to build your own machine.

September 8, 2016 brianradio2016

Devops may be one of the haziest terms in software development, but most of us agree that five activities make devops what it is: continuous integration, continuous delivery, cloud infrastructure, test automation, and configuration management. If you do these five things, you do devops. Clearly, all five are important to get right, but all too easy to get wrong. In particular, continuous integration and continuous delivery (CICD) may be the most difficult devops moves to master.

Continuous integration (CI) is a process in which developers and testers collaboratively validate new code. Traditionally, developers wrote code and integrated it once a month for testing. That was inefficient: a mistake in code from four weeks ago could force the developers to revise code written one week ago. To overcome that problem, CI depends on automation to integrate and test code continuously. Scrum teams using CI commit code at least daily, and most commit code for every change introduced.
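The integrate-and-test-on-every-commit loop can be sketched as follows. This is a conceptual illustration, not a real CI server; `continuous_integration`, `run_tests`, and the commit names are all hypothetical.

```python
def continuous_integration(commits, run_tests):
    """Integrate and test every commit as it lands, instead of batching
    a month of work into one painful merge.

    commits:   iterable of changes, in arrival order.
    run_tests: callable taking the integrated tree, returning pass/fail.
    """
    integrated = []   # the shared mainline
    results = []
    for commit in commits:
        integrated.append(commit)        # merge into the mainline
        passed = run_tests(integrated)   # automated suite runs on every change
        results.append((commit, passed))
        if not passed:
            integrated.pop()             # reject the change; mainline stays green
    return integrated, results
```

The payoff of the small-batch approach is visible in the failure path: a bad commit is identified and rejected immediately, before other work is layered on top of it.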

Continuous delivery (CD) is the process of continuously creating releasable artifacts. Some companies release to users once or even multiple times a day, while others release the software at a slower pace for market reasons. Either way, the ability to release is tested continuously. Continuous deployment, in which every validated change is released automatically, is possible thanks to cloud environments: servers are set up such that you can deploy to production without shutting down and manually updating them.
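The "releasable artifact on every change" idea reduces to a short gated pipeline. This is a hedged sketch of the concept, not any vendor's tooling; `build`, `test`, and `package` are hypothetical stage callables supplied by the caller.

```python
def delivery_pipeline(source, build, test, package):
    """Run one change through build -> test -> package.

    The output is a releasable artifact whether or not the business
    chooses to ship it; an artifact that fails its tests never becomes
    releasable at all.
    """
    artifact = build(source)
    if not test(artifact):
        raise RuntimeError("pipeline failed: artifact is not releasable")
    return package(artifact)
```

A team releasing daily and a team releasing quarterly can run the same pipeline; only the decision to push the packaged artifact to users differs.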

Thus, CICD is a process for continuous development, testing, and delivery of new code. Some companies like Facebook and Netflix use CICD to complete 10 or more releases per week. Other companies struggle to hit that pace because they succumb to one or more of five pitfalls I’ll discuss next.