June 28, 2016 brianradio2016

Sometimes Windows needs a fresh start: maybe a program’s gone awry or a file’s been corrupted. Luckily, Windows 10 lets you do this with a few clicks.

Windows 10 has an option that reinstalls Windows and wipes your programs while keeping your files intact. Note that this won’t get rid of any “bonus” bloatware programs your PC vendor put on your computer before you bought it (you’ll have to remove those manually), but it will get rid of any software you or someone else installed afterward.

Even though Windows says it’ll keep your files intact, it always pays to back up your PC or at least the important files before you do anything like this.

Ready? Okay. Hit the Start button and go to Settings. In Settings, select Update & Security, and in there, select Recovery.
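If you’d rather skip a few clicks, the Settings app also responds to ms-settings URIs, so you can jump straight to the Recovery page from a script. Here’s a minimal sketch in Python, assuming Windows 10; the URI only opens the page, it doesn’t start a reset.

```python
# Minimal sketch: open Settings > Update & Security > Recovery directly on Windows 10.
# Assumes the "ms-settings:recovery" URI is registered (it is on stock Windows 10).
import os

# Hands the URI to the Windows shell, which opens the Settings app on the Recovery page.
os.startfile("ms-settings:recovery")
```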

At the top of the Recovery section you’ll see “Reset this PC.” Click the Get Started button (don’t worry, you’ve still got one more step) and then you get to choose an option. In this case, we’re choosing “Keep my files,” and the dialog box reminds you that this will remove your apps and settings.

Now, I’m not actually going to click this button, because that will start the process. Then you just sit back and let Windows do its thing. It may take a while. When it’s done, you should have a fresh Windows installation, and ideally, your personal files will still be right where you left them.

Windows installs are always an adventure. Tell us your craziest experience at answer@pcworld.com.

June 23, 2016 brianradio2016

While many businesses laud the benefits of cloud computing, some feel less than 100 percent confident in their ability to fully secure their cloud resources.

Is it any wonder? Your corporate network might link to multiple cloud services, run by different operators. Mobile users might be accessing cloud resources simultaneously over dissimilar WANs and device types. Some users and devices fall under your management domain; others don’t.

In fact, corporate data seems to be everywhere. It’s being copied, emailed, shared, and synced wherever users happen to be working. So it’s tough to know exactly where sensitive data is being stored and who has access to it.

How can you successfully enforce internal policies and industry compliance mandates under these conditions, particularly when another entity now controls part of your hosting environment? The answer is to use a CASB (cloud access security broker). You’ll need a certain kind — one with API integration capabilities — to do the job.
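To make “API integration” concrete, here’s a hypothetical sketch of the kind of check an API-mode CASB performs: it calls a cloud service’s admin API, enumerates files shared outside the company, and flags them against policy. The endpoint, token, and field names below are invented for illustration; a real CASB would use each provider’s actual APIs.

```python
# Hypothetical sketch of API-mode cloud visibility: list externally shared files
# via a cloud provider's admin API and flag policy violations.
# The URL, token, and response fields are placeholders, not a real product's API.
import requests

API_URL = "https://cloud.example.com/admin/v1/files"   # placeholder endpoint
TOKEN = "REPLACE_WITH_SERVICE_ACCOUNT_TOKEN"           # placeholder credential
COMPANY_DOMAIN = "example.com"

def externally_shared_files():
    """Yield (file name, outside recipients) for files shared beyond the company domain."""
    resp = requests.get(API_URL, headers={"Authorization": "Bearer " + TOKEN})
    resp.raise_for_status()
    for item in resp.json().get("files", []):
        outside = [user for user in item.get("shared_with", [])
                   if not user.endswith("@" + COMPANY_DOMAIN)]
        if outside:
            yield item["name"], outside

if __name__ == "__main__":
    for name, recipients in externally_shared_files():
        print("POLICY FLAG: %s shared with %s" % (name, recipients))
```

The point is the architecture rather than the code: because the check runs against the provider’s APIs instead of inline network traffic, it can see data at rest and sharing activity even from devices and networks you don’t manage.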

June 21, 2016 brianradio2016

In the last decade, Apple truly revolutionized client computing, moving it out of the boring Wintel PC space where nothing significant had happened for years. We got the iPhone and then the iPad, and with them very different ways to compute for work and pleasure, plus a break from the whole “IT owns everything” mentality that imprisoned users. We got Siri, a big step into the “Star Trek”/”2001: A Space Odyssey” computing future many of us grew up with. We got devices that interacted together as parts of something greater (aka liquid computing).

Then those all became normal, available from Google, Microsoft, and others. Sure, we get advances every year in iOS and, to a lesser extent, macOS, but they’re typically incremental, and often pioneered by other platforms. Apple is now just as much a follower as a leader. That was crystal clear at this year’s Worldwide Developers Conference, a fairly minor affair, just like Google I/O and Microsoft Build before it.

Basically, Apple won the revolution and we now live in that world. In the new status quo, Apple is no longer the pioneer or insurgent but simply a leader in the new establishment.

That doesn’t mean Apple is doomed, as some pundits claim. Nor does it mean that Apple now drives the direction of technology, as other pundits claim. Both are extreme, reductionist positions. But Apple is navigating in new waters where its strengths may not be enough.

June 17, 2016 brianradio2016

(Video: IDG.TV, Jun 16, 2016)

Your USB drive isn’t slow because you have too much stuff on it. It’s slow because it uses a slower file system such as FAT32 or exFAT. You can reformat it to NTFS to get faster write times, but there is a catch.
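If you’d rather script the change than click through Explorer’s Format dialog, here’s a minimal Python sketch that shells out to PowerShell’s Format-Volume cmdlet. It assumes the USB stick is drive E: and that you’re running an elevated session, and it presumes you’ve already copied everything off the drive, because formatting erases it.

```python
# Minimal sketch: reformat a USB drive as NTFS from Python by calling PowerShell.
# Assumes drive letter E: and an elevated (administrator) session.
# WARNING: formatting erases everything on the drive -- back it up first.
import subprocess

DRIVE_LETTER = "E"  # change to your USB drive's letter

ps_command = (
    "Format-Volume -DriveLetter {0} -FileSystem NTFS "
    "-NewFileSystemLabel USBDRIVE"
).format(DRIVE_LETTER)

subprocess.run(["powershell", "-NoProfile", "-Command", ps_command], check=True)
```

The classic format E: /FS:NTFS /Q command from an elevated Command Prompt does the same job; the cmdlet is simply easier to drive from a script.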

June 16, 2016 brianradio2016

The growth of Apache Hadoop over the past decade has proven that this open source technology’s ability to process data at massive scale and give users access to shared resources is not hype. The downside, however, is that Hadoop lacks predictability: it does not let enterprises ensure that the most important jobs complete on time, and it does not make effective use of a cluster’s full capacity.

YARN provides the ability to preempt jobs in order to make room for other jobs that are queued up and waiting to be scheduled. Both the capacity scheduler and the fair scheduler can be statically configured to kill jobs that are taking up cluster resources otherwise needed to schedule higher-priority jobs.
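For reference, enabling that static preemption on the capacity scheduler comes down to a handful of properties. They normally live in yarn-site.xml and capacity-scheduler.xml; the sketch below lists them as a Python dict only to keep the example compact, the queue names are made up, and you should check your Hadoop release’s documentation for exact property names and defaults.

```python
# Sketch of the YARN properties that enable capacity-scheduler preemption.
# In a real cluster these go in yarn-site.xml / capacity-scheduler.xml; the dict
# is just a compact way to list them. Queue names ("critical", "default") are examples.
capacity_preemption_settings = {
    # yarn-site.xml: enable the scheduler monitor and its preemption policy
    "yarn.resourcemanager.scheduler.monitor.enable": "true",
    "yarn.resourcemanager.scheduler.monitor.policies":
        "org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity."
        "ProportionalCapacityPreemptionPolicy",
    # capacity-scheduler.xml: guarantee the high-priority queue a share of the
    # cluster so the policy has a target to preempt toward
    "yarn.scheduler.capacity.root.queues": "critical,default",
    "yarn.scheduler.capacity.root.critical.capacity": "60",
    "yarn.scheduler.capacity.root.default.capacity": "40",
}

# The fair scheduler's equivalent switch (yarn-site.xml):
fair_scheduler_settings = {
    "yarn.scheduler.fair.preemption": "true",
}
```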

These tools can be used when queues are getting backed up with jobs waiting for resources. Unfortunately, they do not resolve the real-time contention problems for jobs already in flight. YARN does not monitor the actual resource utilization of tasks when they are running, so if low-priority applications are monopolizing disk I/O or saturating another hardware resource, high-priority applications have to wait.

As organizations become more advanced in their Hadoop usage and begin running business-critical applications in multitenant clusters, they need to ensure that high-priority jobs do not get stomped on by low-priority jobs. This safeguard is a prerequisite for providing quality of service (QoS) for Hadoop, but has not yet been addressed by the open source project.

June 14, 2016 brianradio2016

Digital transformation, aka DX, is hot, and if you’re not doing it, your company will die and you will lose your CIO or IT leadership job. You’ll (shudder!) be disrupted, or fall on the wrong side of the Innovator’s Dilemma. That’s the message over the last few months from consultants, pundits, and of course vendors.

But wait — wasn’t digital transformation hot in the late 1990s and again in the mid-2000s? Indeed it was. It’s hot again, mainly for the wrong reasons. That is, vendors want you to buy stuff because IT has been cutting back.

There is a very good reason to invest in digital transformation. But it’s not the digital transformation you’re usually sold.

First, here’s a typical definition of digital transformation; it means the same thing consultants and vendors have been saying for years (keep up with new technologies and use them), only the technologies have changed:

June 7, 2016 brianradio2016

There’s a lot of sci-fi-level buzz lately about smart machines and software bots that will use big data and the Internet of things to become autonomous actors: scheduling your personal tasks, driving your car or a delivery truck, managing your finances, ensuring compliance with (and adjusting) your medical regimen, building and perhaps even designing cars and smartphones, and of course connecting you to the products and services they decide you should use.

That’s Silicon Valley’s path for artificial intelligence/machine learning, predictive analytics, big data, and the Internet of things. But there’s another path that gets much less attention: the real world. It too uses AI, analytics, big data, and the Internet of things (aka the industrial Internet in this context), though not in the same manner. Whether you’re looking to choose a next-frontier career path or simply understand what’s going on in technology, it’s important to note the differences.

A recent conversation with Colin Parris, the chief scientist at manufacturing giant General Electric, crystallized in my mind the different paths that the combination of machine learning, big data, and IoT is on. It’s a difference worth understanding.

The real-world path

In the real world — that is, the world of physical objects — computational advances are focused on perfecting models of those objects and the environments in which they operate. Engineers and scientists are trying to build simulacra so that they can model, test, and predict from those virtual versions what will happen in the real world.
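As a toy illustration of that idea (not anything GE actually ships), think of the virtual version as a model you run ahead of the physical asset: feed it the operating conditions you expect and ask what happens next. The wear model and numbers below are invented purely to show the shape of that model-test-predict loop.

```python
# Toy illustration of the model-test-predict loop behind a virtual model of a part.
# The linear wear model, wear limit, and rates are invented for illustration only.

def predicted_wear(hours_run, load_factor, wear_rate=0.001):
    """Predict cumulative wear for a part from hours of operation and load."""
    return hours_run * load_factor * wear_rate

def hours_until_service(current_hours, load_factor, wear_limit=0.8, wear_rate=0.001):
    """Ask the model how many more hours the part can run before hitting its wear limit."""
    remaining_wear = wear_limit - predicted_wear(current_hours, load_factor, wear_rate)
    return max(remaining_wear / (load_factor * wear_rate), 0.0)

if __name__ == "__main__":
    # Test two operating environments virtually before touching the real hardware.
    for load in (0.7, 1.2):
        print("load %.1f -> %.0f hours until service" % (load, hours_until_service(300, load)))
```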

June 6, 2016 brianradio2016

If you haven’t seen the HBO show “Silicon Valley,” it’s worth the time. Along with many jabs at well-known tech companies, it’s surprisingly accurate with regard to how IT and developers think and behave. It’s not perfect, but I can’t recall ever hearing a vim-versus-emacs debate in a popular television show. (Can you?)

That same episode used an argument over spaces versus tabs in source code as a plot device. I’m sure most people watching the show weren’t entirely sure what that meant, but it was carried off well enough to make clear it was a big problem. Frankly, to many developers, it is a big problem and has been for quite some time. Truly, however, vim versus emacs is a much holier war.

Besides the obvious “neato” factor of deep techie culture surfacing in a mass-market television show, there’s a bigger point to be made. The parody serves to underscore that some problems we face have multiple right answers, and we often waste an awful lot of energy in pointless debates. Also, it’s best to have a development style guide if possible, so at least there’s a standard arbiter of such issues.

Some aspects of IT have only one right answer: the right route in a core switch, for example, or the right entry in an ACL that makes everything work. But in other places, the rules aren’t so binary. A great example is naming, of every variety: naming of servers, storage, networks, variables in source code, fields in databases, keys in arrays.