
Friday, April 25, 2014

Descent into madness?

Working alone from the quiet solitude of our home in the Russian River redwoods sometimes comes with the danger of losing touch.

Of reality.

Or maybe this was just me taking too much benadryl to fight the poison oak.


Wore the tie the kids gave me to work today. Here I am harping at the worker bees to quit wasting time and get back to it, or they'd all be working this weekend.

Power tie and power... robe?

And now, I will transform into business ninja using ONLY THE POWER OF MY MIND...

Ta daaaaa!



Wednesday, January 29, 2014

H2T shall henceforth be called... Ubiquity!


... because it's easier to say. And sounds cooler.

That is all.


Google Cloud Print - A Remote Desktop Solution For "Anywhere" Printing

It isn't often that I get giddy about a tech discovery (or re-discovery in this case), but when an elegant and effective solution to a significant technical challenge comes along, I experience Nerd Joy. This week's Nerd Joy brought to me by Google's updated Cloud Print service.

I came across Cloud Print quite a while ago, back when Google first released it, I think. My impression was that it was interesting, but not quite useful enough to really try and fit into my infrastructure. This week I happened to stumble across information about the updates Google made to the service back in July of 2013.

Specifically, Google now provides a mechanism for enabling access to remote printing via a Windows service that runs in the background on a client, which administrators can then associate with a "master" print user in Google Apps. This user can then share those printer instances tied to that account with the rest of the users in the Google Apps domain, much like access to folders and files in Drive can be shared.

I installed and tested this service on a couple of machines at different sites, then shared the printers that were added to a test account. Printers can be shared per user, group, or publicly via a link that, when clicked, prompts the currently logged in Google Apps user to add the printer to their collection. While this mechanism works most effectively when someone is using Chrome, I also tested printing while logged into the Apps test account using Firefox.

The fairly minor catch in this process is that printing is not handled by the local Windows print service; instead, a document not already in Drive must be uploaded into the print queue.



A Windows Cloud Print "driver" is available that installs a virtual print driver (much like the "universal" print drivers from HP), but initial testing did not meet with the same success as uploading documents. When the virtual print driver was selected as the target in a Windows application print dialog, the print job got stuck in the queue with a status of "in progress". From what I can gather, this is a commonly encountered wrinkle. However, it is FAR from what I would call a "show stopper".

The reason this has me so giddy is because remote printing has been the one function people count on that's not reliably available using Remote Desktop. Yes, RD can "redirect" a printer connected to a local machine so that the printer instance is available for printing from the remote session, but unless the remote desktop session in question is logged into a client behind the same router as the local machine, printing response is glacial, when it's available at all.

I have toyed with the thought of making one or two printers at each site available via port forwarding and enabling IPP, but my impression is that this would be no less cumbersome to employ for remote desktop use because it still requires print driver installation on each potential RD host. Not manageable.

With Cloud Print, not only can I skip the driver installation requirement (or the hair-pulling experience of driver compatibility on servers) and messing with GPO to get Easy Print "working", but I can also manage permissions and enable end user "self-install" of Cloud Printers with a single internal web page that provides a link to each printer that staff can add as needed.

Installing the service on one machine at each site that has multiple local networked printers installed is dead simple, and just WORKS. Staff who will be using remote desktop from multiple sites can be motivated to take that extra step to print if it means printing is now a more responsive function and they don't have to email themselves a document, then download it to the local machine (completely negating the security/privacy feature of having a remote desktop in the first place), then try and figure out which printer is available from the local machine, then remember to delete that downloaded document... et cetera, et cetera. 

Typically, in my experience, if a workaround procedure takes more than one extra step, it's going to cause frustration and ultimately be avoided. 

Providing all the functionality of a local desktop session within the privacy/security of RD is the ultimate goal for the H2T project. Google Cloud Print provides the means for staff using Remote Desktop to keep ALL their work functions encapsulated in the RD environment. Furthermore, GCP enables a critical enterprise function to be managed within Google Apps, removing yet another complicated piece from the Microsoft Server puzzle I am trying to simplify.

Finally, in the "icing on the cake" department, there are GCP apps for both Android and iOS, making the prospect of printing from mobile devices a very realistic option.

On we go!

Friday, January 17, 2014

New Year Check-In - H2T progress!

It's 2014 and the H2T project has moved past the POC phase: a Windows 7 VM hosted on a repurposed IT workstation running Server 2012. 

This shared remote desktop host is being used by 4 staff members. These staff members were previously using Windows XP, and not only made the transition to remote desktop without issue, but also did well with the move to Windows 7 (I did a lot of work to make the desktop navigation similar). Granted, 2 of the 4 staff had been using RD to work from home for the last year already, but this was still a significant change for them. As a benefit to those who had been using it for a while, their home RD experience was improved with a direct connection to the RD host.

The POC was a 2 month initiate and monitor process. I encountered some issues (Firefox does not allow for concurrent use), but discovered new ways to refine the management of user profiles (use Portable FF, place in folder of Default profile and shortcut on desktop, multi-user FF!!!).

There were some refinements to the remote desktop client config needed prior to finalization of the Thinkiosk configuration for the converted machines used by the 4 staff. I found some discrepancies in RDP version, and variations between RDP on 7 and XP. Additionally, there were workarounds necessary in the setup of RDP for the Thinkiosk client configuration. These have not amounted to show stoppers, just more insight about the nuances that sometimes drive this effort. 

Getting to the point where calling up RD from the kiosk interface never remembers data from one use to another (use /public in RDP shortcut config of Thinkiosk) was the goal. I am now sufficiently comfortable with the Thinkiosk client installation and configuration that I am moving forward with the first H2T kiosk image.
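For reference, this is the shape of the RDP shortcut target I mean - a minimal sketch with a placeholder host name, not my actual config:

rem Hypothetical RDP shortcut target for the ThinKiosk profile; /public runs the client in
rem public mode so it doesn't cache credentials or bitmaps between sessions.
mstsc.exe /v:[RDhost] /public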

For much of the project, my intention has been to avoid rebuilding machines. Having spent what I would call "too much tinker time" on the first 3 XP conversions to kiosk mode - removing old applications and data, updating and securing the OS - I have concluded that it will be faster to rebuild a machine with Windows Embedded (the name I keep seeing for Win 7 ThinPC), a pre-installed generic config for ccarcvpn (not auto-connected), and a generic Thinkiosk config applied. I also installed the most current version of IE (11) - the browser called up when the kiosk interface loads - and brought OS updates to current. There is no other software on these machines; no Flash; no Java; no Adobe Reader. Nothing to monitor or undo later. Simple.

Yesterday I built a demo "kiosk" PC using a miniITX form factor motherboard (CPU included) and case, a 64GB SSD, optical drive, and 2GB of RAM. The optical drive is optional for most cases with a machine like this. External drives can be made available to departments that need them regularly. Because this was my first MiniITX build it took a little longer to put the parts together (GIANT HANDS) but box-to-boot time was still only about 30 minutes.

The bottom line for this machine without an optical drive is $207. I put WE and Thinkiosk on it. Smooth like butter. This will be the de facto kiosk client hardware configuration for new machines.

Next step is to image this new machine and apply that image to the old Dell machines (Dimension E310s and E3100s). It will be interesting to clock the rebuild process. I can't imagine it will take anywhere near as long as decommissioning software on XP machines by hand. The bugger will be with drivers, as the kiosk image is a pristine build with only the drivers for current hardware tucked into the OS data.

/goes off to do some braining/

Later The Same Day...

I just did a rebuild on an old Dell with the kiosk image and a current batch of drivers from Driverpacks.net. I was entirely unprepared for the speed of this rebuild. The build image is about 6GB, easily 10GB smaller than the existing and bloated Windows 7 images. From the time I hit go in Acronis to start the rebuild, it was about 9 minutes til I was clicking through the kiosk interface and accessing the web and remote desktops as hoped. Didn't get nagged for drivers. No devices were left out. It just worked. It was what I imagine loading an OS with PXE to be.

Nine minutes. Twenty minutes less than the average time for the previous generation of builds.

Scrumtrillescent.

Updated 01.29.2014 - The H2T project is now called Ubiquity.

Monday, December 23, 2013

Followup thoughts about screencasting as a documentation tool...

I am convinced this is the very best way for me to make some serious headway on my documentation brain dump. After 4 days, I have captured 2 hours and 16 minutes of screentime. This is the video library so far...


I am trying to come up with a naming convention that lets me group each category of video together, such as "Quick Tour" and "Docuvid". When those terms or phrases are searched on Drive, they return a flat list of all "chapters" in that series. The simpler the naming convention, the better.

The benefit of using this technique to group these together with search is that the files can be located multiple ways. They can be located in the tech library and opened by drilling down through folder structure, which is arranged in what seems like the most logical fashion, but can get changed as the documentation structure matures. The search approach makes their logical location irrelevant, and is therefore more suited to a growing and changing layout.
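To make that concrete, the kind of titles I have in mind look something like this (these particular names are invented for illustration):

Quick Tour - 01 - SAManage Admin Console
Quick Tour - 02 - Google Apps Control Panel
Docuvid - 01 - Nightly Backup Jobs
Docuvid - 02 - VPN Client Setup

Searching Drive for "Quick Tour" or "Docuvid" then pulls up the whole series in one flat list, no matter which folders the files end up living in.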

After I did a few recordings, I shot a note to our training coordinator with a link to a few samples. I suggested that perhaps the quick tour concept would help with training for all staff, on a number of different topics. All it takes is time and money, right?

The more of these that I make, the better I feel about getting it done. Although screencasting doesn't make the IT training and documentation material magically create and direct itself, it does make the process of capturing information much more fluid.

Thursday, December 19, 2013

Quick Tour IT training and documentation videos... or How Screencasts Saved My Sanity!

Documentation = EXCITEMENT AND GLAMOUR!

Right? I will be the first to admit that I am not exactly fond of doing documentation for my job. It's mind numbing, even to a geek like me, but it hasta get done!

By documentation, I mean the tracking, recording, and updating of information on all the bits of the agency IT infrastructure; all the router configs, IT service logins, server setups, network addresses, software and hardware configurations, procedures, protocols, and requirements for all the doohickeys.

I have done significant documentation with Google Docs and Sheets. That is certainly a more immediate, more "malleable", and focused view on any given detail of IT knowledge, but to put that kind of documentation together requires a concentrated effort free of distraction. It must also contain a certain amount of Beginner's perspective in order to avoid leaving out "assumed facts". It is hard to put those details in a "typical usage scenario" context. Producing screencast videos of these details shows the workflow AND captures details on screen that can easily be neglected (or presented in such density as to be overwhelming) in printable docs.

I have been on a mission this week to produce one or two "quick tour" videos a day of mission critical services and admin consoles using Screencast-o-matic. The videos are between 5 and 10 minutes long usually, and are uploaded to the Google Drive directory for my staff IT account. The documentation is then securely shared with select senior staff as contingency. Finally, I created a subdomain to consolidate and make the entire list of videos accessible at quicktour.arcofcc.org.

Being a mostly solo support person with the majority of knowledge trapped in my skull, I am understandably concerned about providing the MOST transfer of that knowledge to a form easily accessed and digested by someone besides me if the need arose. If I am doing this right, someone with a reasonable amount of Windows and WAN/LAN experience should be able to spend a day or two watching these videos and have a solid grasp of the lay of the IT land around here if the someone who is me is not available.

If there's one thing I strive for in this job, it's to be prepared. An IT department cannot be prepared without documentation. Producing these kinds of videos is not all that needs doing to achieve this goal, but it serves a lot of information efficiently. I predict that this will help me fill out the agency "tech library" in record time.

Not that I'm going anywhere!

Thursday, October 24, 2013

Simple vs. Efficient in Desktop Management Strategy

In my time as ARC IT Guy, the quest to Keep It Simple has been a driving force behind many of the choices I've made about how the infrastructure developed here. Admittedly, Simple to me has an element of "existing familiarity" when it comes to implementing tech, and being familiar with a system already reduces the resistance I have to finding a fit for it long-term. Simple also takes into account the need to make IT administration roles and duties accessible and quickly absorbed if a backup IT person is brought in to temporarily or permanently serve as IT Coordinator. SAManage, as our ITSM platform, demonstrates this principle.

If I have to figure in the time cost to learn something new, that weighs heavier in the evaluation of new IT components for long-term deployment and management. Having said that, there are times when Simple starts creating more work for me as it scales up... or when a new approach would mean the same or less work long term, but requires a significant learning and labbing curve for the person charged with implementation. The test of Simple vs. Efficient is popping up a lot with the H2T project, and mostly in the context of Active Directory, DNS, and domains in general. We do not currently employ any of those mechanisms in our environment.

As I unroll the map of the H2T project, and also listen to how others have seen this same kind of effort succeed or falter, there are certainly indications that reconsidering my stance on AD and DNS adoption here would be in our best interest. With the proliferation of these server features and the platforms they enable, there is a better chance of emergency IT support availability from people already versed in their management. It is now incumbent on me to make sure the Simple is more a reflection of well-established strategies for managing resources at our level, and in the way we use those resources.

And so, I make room in my brains for a learning binge. That string around my head in the blog header is all that holds it in some days!


Friday, October 18, 2013

Windows 7: A multi-profile, concurrently accessible RDP host? Hypothetically...

Say there was a geek in a hypothetical IT lab situation who discovered how to enable Windows 7 as a concurrent multi-user RDP host. A user density of 7 per virtual desktop is the goal. Said geek had enough Windows 7 licenses to cover each user instance connected to a host, but wonders...

Is this skirting a hard EULA violation worthy of vendor wrath? If a lab discovers how to make this work, could said shop cover the additional profiles on a single machine using CALs vs full Windows 7 licenses going forward?

What does the EULA for Windows 7 Pro say about all this witchcraft?

From Section 3 
ADDITIONAL LICENSING REQUIREMENTS AND/OR USE RIGHTS.


f. Device Connections. You may allow up to 20 other devices to access software installed on the licensed computer to use only File Services, Print Services, Internet Information Services and Internet Connection Sharing and Telephony Services.
g. Remote Access Technologies. You may access and use the software installed on the licensed computer remotely from another device using remote access technologies as follows:
· Remote Desktop. The single primary user of the licensed computer may access a session from any other device using Remote Desktop or similar technologies. A “session” means the experience of interacting with the software, directly or indirectly, through any combination of input, output and display peripherals. Other users may access a session from any device using these technologies, if the remote device is separately licensed to run the software.
· Other Access Technologies. You may use Remote Assistance or similar technologies to share an active session.
Also figure into the equation that all end-point RD guest kiosks will be running as Windows 7 ThinPCs, and the hypothetical shop has the Software Assurance to migrate the full OS to a virtual machine.

Pondering this frankensteining of the go-to desktop for this shop, questions begin to arise.
  • Why would a lab even go down this road? 
  • Why not just go the straight route and get CALs on Server 2008? 
  • Is Server 2008 overkill for the desktop experience? 
  • Can 7 be to Server 2008 what ThinPC is to 7, in the context of simplified, stripped down multi-user virtual desktop hosting roles? 
  • Does the Remote Desktop/Other Users feature of this agreement validate the course taken by this mad scientist?
To that last question, hypothetically, yes.

Wednesday, October 16, 2013

HipChat Lightning Review - Is ARC ready for a group chat platform?

I was perusing Google Apps Marketplace this morning to see what's new out there and found HipChat, a multi-platform business group chat service. I am not sure why this one caught my eye initially, except that it had Google Apps integration. I watched the peppy-background-music video and then goog'd hipchat nonprofit, whereupon I discovered, for us, it is free.

OK, so it integrates with GApps. How much do we use the baked-in IM in webmail? Would we use it more if a chat room space was available for multiple users? Would it really cut down on frivolous email chains? That is what remains to be seen. I signed us up. Sure enough, after approving API access for our domain to HipChat, it showed up in the More menu.

Ease of deployment is at the top of the Good Stuff list.

Staff clicks through More > Hipchat and is greeted with a form asking for Job Description and a password. I was somewhat puzzled by the prompt to set a password since it's supposed to be passthrough auth, but I used my agency email password thinking it would then enable passthrough. In any case, the app is where it needs to be in webmail, and all other login can be done in the background.

I set up a chat room for the two 1340arnold accounts and took a second to figure out how to use the chat window. Found that and started a chat. First thing to configure: disabling the new chat audio bell. Here's a screenshot of settings for the webclient...



Yes, definitely sound off.

I am going to toss this out there for admin staff to try. My sense is that it could be easily adopted, simply because of how pervasive IM has become. I don't want it to become burdensome. If it truly helps cut down on email, then it has value to staff in general.

From an admin standpoint, it automatically recognizes my admin status in Gapps when I log into the web control panel. At the top of the window there is a tab for "Group Admin" where more granular control of permissions can be assigned and managed. Other management modules include individual configuration of notification parameters and paths, browsing of rooms and users, and deletion of account. 

I am curious to see if this gets a good reception and becomes useful. For the cost, it's worth the effort. Staff can access it by going to chat.arcofcc.org.

Suggested uses...
  • File sharing between staff and guests, staff and staff
  • New employee assistance - new hires can ask questions in monitored rooms to get answers about how things work around here; guided answers can eventually be collected into a FAQ, but chat history is available to search in the meantime
  • New room creation request channel
  • Join request channel for private room access
  • Program, workgroup, or task specific rooms can provide support for staff in every program; facilitates discussion of protocol, method, tools, and guidelines
  • Broadcast information for mobile groups and teams without impacting email
  • State of The Agency broadcast channel
  • HipChat allows for guest access via private URL to specific rooms, enabling instant support channels for families by ARC staff or having group discussions with staff from other agencies 
  • Private channels for management at each level
Honestly, I am not sure this will be any more useful than Google Groups has been, but IM is less structured than Groups, and as such less cumbersome to manage. That reminds me, I should figure out a good use for Groups. A plus about Groups is that there's a Manage option in user permissions and no such equivalent in HipChat that I saw. There is also no obvious "ban" option if guest mode is enabled for a HipChat room, other than to disable and re-enable guest mode (which does generate a new URL if re-enabled).

In summary, HipChat is an interesting option for expanded agency and community collaboration services, falling somewhere between Google Talk's one-to-one approach and Google Groups' "bulletin board" platform on the function spectrum. 

Does anyone really want to manage or absorb another information input source? Adopting new communication tools requires that those tools offset the overhead of adoption with rapidly realized USEFUL benefits in the span of time it takes to learn where all the buttons and options are. Too much for us right now?

Time will tell.


Edit 10-17: A colleague pointed me at this review of Hipchat and 2 other group chat options. Looks like I picked the right one.

Sunday, October 13, 2013

Product Impressions, First Look: ThinKiosk v4 by Thinscale Technology

I have frequently paused to consider doing a product review over the last couple weeks, as I evaluate and digest the options available for managing kiosk/VDI based interface across the agency. 

I have come to learn:
  • there are several interesting contenders in the field of kiosk PC management, and they range from too expensive to free (openthinclient, roll-your-own interface with HTA, several Linux options)
  • having a clearly defined goal and project parameters, with a logical progression of implementation laid out, makes a difference in refining candidacy for adoption
  • writing things down is helping me look at our options with a better sense of organization
  • I should probably use a gantt chart or mindmap at some point
  • I must give more weight to simple approaches and figure out how to measure the workload required in both initial deployment and long term management of new components to my administrative duties
  • I am enjoying this process more than I would have thought
  • there is soooooo muuuuuch moooooooore to learn
In the case of ThinKiosk - a suite of end-point client profile creation, deployment, and management tools I am currently taking a run at - I am stopping my flurry of research and eval to give an initial impression. Version 4 of this suite was released in September of 2013 after a stretch of intense revision from previous versions by Andrew Morgan and his team.

So what features/aspects/problem-solving/superhero bits does this product have that compelled me to write it up? Certainly there are other products with very similar functionality, but these are things that grabbed me right off about ThinKiosk. 

Keep in mind as you read, these are observations from the perspective of a non-profit IT admin lone ranger with limited resources, and a completely full plate.

And in no particular order, these are the things I appreciate most so far:
  • RESPONSIVE DEVELOPERS - this is what gets me to recommend spending money on software for our agency, when there are free options for almost every function an IT shop oversees (but which can end up costing more in time and effort to prep the components and put pre-requisites in place).
  • client and server both Windows-based. Windows I know, fiddling with Linux is not something I want to take on right now
  • ease of installation and config for all components (though not on my first try)
  • the LACK of requirement for AD to be incorporated to make it work
  • a central repository for machine interface profiles that isn't AD-based
  • an end-point remote control feature
  • somehow enables an admin to make the machine more secure and more accessible at the same time, in a ridiculously straight-forward way... without AD
  • SILENTLY deployable client with command line options as msi
  • it could very well let me do the 40 remaining desktop XP upgrades remotely
  • full screen shell alternative to explorer, with auto-login, customizable and secure enough to create a dual guest/staff interface to appropriate resources
I have spent a few hours now digging around in the documentation, poring over the support forum, reading product literature, installing, configuring, testing, cursing, uninstalling, and reinstalling... AND I managed to get the dishes and vacuuming done during all that!

Although my initial installation met with complications caused by my own configuration of desktops and VPN networking, the reinstallation on another server using a different network ingress functions the way I would expect. Accounting for the learning curve, the server and first installed client (on a remote Windows 7 x86 PC at the office) do everything I ask.

To expand on my list above...

As much as I would like to give Linux a place in our infrastructure (for both economic and platform flexibility reasons), I have to be practical and know when to bail on an experiment with Windows alternatives. I have to be able to see the long term management cost, and any possible overhead during rollout that would cause me to double back and start from scratch. I encountered this a couple of times with VPN development and deployment, when a solid product or vendor that smoothly integrated early on and held up in production later developed some fatal shortcoming and I had to start over.

Having an easy installation, for both client and server, is key to making progress and being able to dedicate attention to nuances of configuration before blasting out to the universe. Getting the basic installation down and having a handle on the configurations quickly is what lets me get to putting the product through its paces. If a product shows enough promise, I will slog through a few hurdles to make it work. ThinKiosk has been fairly well-behaved for a Windows 2008r2 server and Windows 7 x86 client install.

The frequency of Active Directory being featured on my list is directly proportional to how much I am really trying to avoid having to figure it out RIGHT NOW and add it to the prereq list. I don't have that luxury of time or brains to spare yet. Having to consider setting up AD and DNS puts a candidate product in the same class as having to mess around with Linux, in terms of effort cost. Don't get me wrong, I would like the user and configuration management goodness that AD represents, but it has always seemed beyond the scope of what I can manage for the agency. Maybe someday. For now, with ThinKiosk, that is a non-issue.

I have gradually done what I can to centralize critical IT functions over the last 6 years when the opportunity or solution presents itself. There have been 2 key developments for IT in the last year for the agency that open the doors for the centralization of more mechanisms: upgraded data service at most of our sites, and a VPN infrastructure. In a project such as our H2T initiative, I am taking on an aspect of the desktop experience that hasn't needed as much "hands on" after a deployment as this will. To scale this rollout up after the initial tweaking, a mechanism to manage the interface on each machine from one place is the only way I can keep on top of this in the long term. ThinKiosk's management console makes client configuration and profile deployment very easy.

I would argue that there is no way to effectively manage an IT infrastructure at any size deployment without a remote control tool for remote troubleshooting. Sometimes there is just no substitute (or amount of patience) for trying to talk someone through navigating Windows. Since being there in person for this is very much not an option for me with 11 sites spread all over the county, I need this "be anywhere" magic in the mix. One of ThinKiosk's major benefits is the remote control feature that allows an admin to shadow the client, even when they are in a remote desktop session. I have been testing another product from IntelliAdmin that provides the same mechanism from the other side of the remote desktop session, on the host. Both have their place and value in my toolbox. I tend not to think one can have "enough" remote connection options, honestly.

One of the biggest challenges I have faced in my desktop support career is finding that balance between maintaining PC security and giving folks the option of customizing their desktop environment. If you don't lock things down tight enough, or conversely lock them down too tight, support call volume WILL increase. I don't want to get calls about either a compromised desktop OR an app that won't start or webpage resource that won't work because of UAC. 

The challenge ThinKiosk takes on, and subsequently conquers, is enabling an endpoint framework that thoroughly secures the OS, but supports the means for anyone using the client to access information or services easily, be it staff, management, or visitors. One of the options available with the client is auto-login to a profile created by the ThinKiosk install (the option for using other existing logins is also available). It can allow basic functions such as web browsing (or use of any other app on the PC as set by the kiosk profile) that require no special permissions, and also pre-configured remote desktop shortcuts that connect staff securely to VDI sessions. As such, the end point client becomes much less of a potential attack surface, and existing staff desktops become even more secure. Simply amazing (to me, anyway)!

The ThinKiosk client can be installed using msiexec and various command line options for the install to configure the connection broker server, port, and user login. Installing things from the command line is a giant time saver, especially when used in conjunction with a tool like psexec. The mind-numbing click-fest that is software installation can be avoided, and kicked off after hours to wrap up a deployment with minimal effort.
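As a rough sketch of what that looks like (the machine name and share path are made up, and the actual broker server/port/login properties should come from ThinScale's install documentation, not from me):

rem Push the ThinKiosk client MSI to a remote PC and install it silently as SYSTEM.
rem Append the broker/port/login properties per ThinScale's docs to the msiexec line.
rem The share must be readable by the remote computer account, or copy the MSI locally first.
psexec \\OFFICE-PC07 -s msiexec /i "\\server\deploy\ThinKiosk.msi" /qn /l*v C:\Windows\Temp\thinkiosk-install.log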

Factoring into the decision process for how to make H2T a reality is the current effort to migrate remaining XP desktops to Windows 7 by April of 2014. Up til now, I have been rebuilding those PCs by hand, one at a time. With ThinKiosk, I have the option of leaving those machines with XP on them til a later time, but still enable the Windows 7 experience with RDP shortcuts. I will still need to put Windows 7 (ThinPC) on them, but in the mean time I can put the interface on these PCs that they will still be using even after an OS upgrade on the client. This is a consolidation of effort that benefits 2 projects.

I have already touched on this, but separating the production desktop experience from the end-point machine makes it possible to increase resource access for all staff. Their individual desktop experience is no longer tied to one machine. If one end-point is down, they can pick up where they left off on another one. Guests can make use of end-points for internet access without compromising the agency infrastructure. ThinKiosk really simplifies the process of making staff desktops more secure, but paradoxically more universally accessible.

So far I am really impressed with the potential advancements I can make with our infrastructure by using ThinKiosk for our end-point management. More as it develops.

Friday, October 4, 2013

Shift Happens Phase 2: Bye bye, ESXi!

Part of staying nimble enough as an IT survivalist is learning how to use the right tool for the right job, and not get hung up on vendor or environment loyalties when it requires extraordinary measures to make a solution fit. 

In my case, this week, that has meant assessing ESXi as a hypervisor for the H2T project and discovering a few hurdles along the way. We already use it as the hypervisor for our admin servers, so I am familiar with the management side. I googled around for a while to get a sense of what VDI/RDP options would fit into the equation. And I learned:

  • ESXi 5.5, which I wanted to give a run since it is the next iteration of what we've installed, is crippled in a significant way: it's limited to a 60 day trial that requires vSphere to enable the vCenter fat client. A discussion of 5.5 here.

    In general ESXi presented other challenges in the context of this project:
    • We don't have enough host licenses to deploy after POC
    • It's a pain in the ass to interact with/mount a VMFS volume for faster data transfer in some cases (not possible with Windows except via SFTP to the datastore on the host). This would seriously hamper efforts to migrate Windows machines to the VDI environment using TIB images to convert the machine directly on the host.
    • It didn't recognize my RAID adapter on install
    • The host doesn't image with Acronis
  • ThinLinc server:
    • Not hard to install on Ubuntu, but configuration to use with Remote Desktop services was not as clear as I'd hoped
    • ThinLinc was only the gateway piece; I still expected some struggle and learning curve on the Windows server side of things
  • Windows Server 2012 installed on bare metal:
    • We have enough licenses to get through POC, beta, and phase 1 rollout
    • Is very simple to mount a VHD, attach it, and transfer a large file into it for immediate use by a virtual machine
    • It recognized my POC box's RAID adapter
    • The host will image with Acronis
    • Has a number of other excellent features not available in the free version of ESXi.
My reluctance to start off with Windows Hyper-V was based on anecdotal experiences regarding the version on Server 2008. Server 2012 seems to have removed those challenges and so far has been a dream to work with. I was able to install the OS and enable Hyper-V, build a VM with an Acronis CD image and the agency Windows 7x64 TIB image, all in about 2 hours.

Some initial testing with Hyper-V has shown promise. I configured a Win7 VM with 2GB memory, 50 GB disk space, and 1 CPU. The resources have been throttled to a max of 16%, which means on the test box at that threshold I could run 5 VMs. I would like to have a density of 10 VMs per host minimum (with 2 concurrent users per guest), but that is a fairly arbitrary number and needs further exploration.
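For what it's worth, the back-of-the-napkin math behind that count, as I read it:

5 VMs x 16% = 80% of host CPU, leaving roughly 20% of headroom for the host OS and Hyper-V overhead; the hard ceiling at that cap would be 6 (100 / 16 ≈ 6.25).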

I tested responsiveness from a local RD session as well as remote (RD into the Helpdesk server, out to a machine in Richmond, and back into the test VM at home). I connected to webmail and ran a YouTube video. It was pretty snappy, considering the remote client config and the lower bandwidth of that remote site. In the host monitor console, the VM never used more than the configured 16%.

I will add a few more VMs and tune the upper resource limit to see where the connection slows down.

Bottom line is that I feel like I am making greater progress with Server 2012 Hyper-V than I was with ESXi. ESXi will continue to host our servers, but for the H2T project I need a host platform with a more familiar environment. So Hyper-V it is.

For now.

Tuesday, October 1, 2013

Free automated ESXi v5 VM backups for those of us on the FREE Edition!

10-01-2013 - Was just reminded of a low budget backup alternative I cobbled together while reading and responding to a post over on the IntelliAdmin website, so decided to post the mind-numbingly geeky details of the mechanism here in case I forget where else I might have put them.

All of our agency server instances run as guest OSes on an ESXi host. I have backup jobs scheduled from one of those servers to take care of nightly production data archives, but no automated mechanism for backing up the OS volumes from the host datastore itself. I briefly checked into Veeam's products, but we don't have an Essentials license for our hosts (or the cash for the automation upgrade in Veeam's full product), so it was a non-starter. Hot backups would have been nice for VM archiving, but if a window for offline archiving exists, then this tool is good for that situation.

I figured out how to perform a command-line sync of folders on the ESXi host's datastores using a batch script that can THEORETICALLY be executed as a Windows scheduled task (run on a schedule or manually from any host on the same subnet) and that:

  • runs a batch file that connects PuTTY (free, portable install on server share) via SSH to the ESXi SSH server (which must be enabled and set to run on host startup)
  • (will EVENTUALLY, after more testing) issue a shutdown command to the VM (no vSphere client or target VM console connection required)
  • runs WinSCP (also free, and portable install on server share) via a command script and kicks off a synchronization between an ESXi datastore folder containing your (shut down/powered off) VM and a local or network folder
  • (will EVENTUALLY, after more testing) issue a startup command to the VM 
This SEEMS to work elegantly, and as stated, the job can be run from any host on the network logged on as the designated agency backup user, from portable versions of PuTTY and WinSCP, unattended.

The script to connect the SSH session must be run first, and looks like this:

\\serverpath\putty.exe -ssh [ESXiuser]@[serverIP] -pw [ESXipw]

Running this the first time on a machine prompts for confirmation of the host's SSH key.

The batch script then goes on to spawn a WinSCP sync session:

\\serverpath\WinSCPPE.exe /console /script=\\serverpath\[winscpbackup script].txt

In [winscpbackup script].txt we have:

open sftp://[ESXiuser]:[ESXipw]@[serverIP]
cd /vmfs/volumes/[targetbackupdatastore ID]/[target folder]
lcd \\[backupserver path]\
option transfer binary

synchronize local

To find the value for [targetbackupdatastore ID], you will need to connect once to the ESXi host with the WinSCP GUI to browse to the datastore folders (from root >vmfs/volumes).

This backup will only run to completion if the target VM files are not locked, requiring powering off the guest OS. This can be done from command line in PuTTY as follows:

For command line shutdown (logged into host using ssh putty)... 

vim-cmd vmsvc/getallvms 

To get the current state of a virtual machine: 

vim-cmd vmsvc/power.getstate <vmid> 

Shut down the virtual machine using the VMID listed in the first column of the getallvms output, and run:

vim-cmd vmsvc/power.shutdown <vmid> 

Note: If the virtual machine fails to shut down, use this command: 

vim-cmd vmsvc/power.off <vmid> 

Once backup has completed, powering on the machines from command line is as follows:

Check the power state of the virtual machine with the command: 

vim-cmd vmsvc/power.getstate <vmid> 

Power-on the virtual machine with the command: 


vim-cmd vmsvc/power.on <vmid>

This is a wholesale sync of all data in the datastore folders of each VM. To reduce redundant backup of data already managed by other more configurable, finer-grained backup tools, it is good practice to keep a server's data drive in a separate folder from the OS partition. If that is not possible, the WinSCP script might be enhanced using file type wildcards.
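For the curious, once the EVENTUALLY pieces above are tested, the whole job might collapse into a single batch along these lines - an untested sketch using the same bracketed placeholders as before, with plink.exe (PuTTY's command-line sibling) handling the non-interactive SSH commands:

rem Untested sketch - the ESXi host key must already be cached (run PuTTY/plink interactively once).
\\serverpath\plink.exe -ssh [ESXiuser]@[serverIP] -pw [ESXipw] -batch "vim-cmd vmsvc/power.shutdown [vmid]"
rem Give the guest a couple of minutes to shut down cleanly before touching its files.
timeout /t 120 /nobreak
rem Sync the powered-off VM folder to the backup location.
\\serverpath\WinSCPPE.exe /console /script=\\serverpath\[winscpbackup script].txt
rem Bring the guest back up.
\\serverpath\plink.exe -ssh [ESXiuser]@[serverIP] -pw [ESXipw] -batch "vim-cmd vmsvc/power.on [vmid]"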

DISCLAIMER: While I have stepped through these procedures manually, I have not yet tried this mechanism as a scheduled task run without my eyeballs on it. I am not sure how the system will handle the putty and winscp instances in non-desktop mode. Will be testing this before the end of the month.

This post is referenced in the Contra Costa ARC Data Backup Summary [internal link].

Monday, September 30, 2013

Shift Happens: Meet Tootie... the H is silent!

Q1-Q4 2014 - Ubiquity Network
Virtual Desktop Deployment
Formerly H2T ("Here To There")


Project Goal: reduce the number of standalone PC client OS installs through the use of Remote Desktop Server and low-load virtual distributed desktop architecture.

Primary Benefits: reduction of the dollar and time cost of standalone OS maintenance and management ... extended deployment of existing client workstations that lack the resources to run a post-Win7 OS.
---

The Pitch:

With the challenging times facing our agency, we are having to ask staff to take on more, be in many places, and play many roles at once. Increasingly, with the mobility requirements now facing these staff members, access to the information they need and the tools to process it must be "unlocalized" from their standpoint. In other words, their data and apps need to follow them, to be called up and look the same no matter where they are being accessed.


Enabling access to your desktop environment from a consistent interface on any internet-connected computer - independent of the OS, the location, or the network - is the intended outcome of this project.

This organization will continue the evaluation and promotion of remote desktop environments  - which it has already begun through the introduction of Microsoft Remote Desktop (Q2 2013) - for staff who need access to their work materials and tools from wherever they are. Previous to that it was LogMeIn (2008-2013), and so the remote desktop concept has been exercised for several years here in one form or another. 

The Ubiquity project (formerly H2T, "Here To There") is evolving out of that initiative. The goal is to maximize the value of our updated data service and reliability and, for desktops, not just to extend the usable life of the current PCs in agency inventory, but also to provide access from non-agency PCs, making calling up "your" desktop a simple process of typing in a short web address (e.g. "mystuff.arcofcc.org") and using your email user name and password to log in to your familiar Windows 7 workspace as you left it the last time you logged in. Your stuff and your space, any place.

This is foundational infrastructure for a "secured anywhere desktop" initiative to enhance staff access to agency-critical computer resources, leverage existing hardware installations, enhance data security, and reduce administrative overhead inherent in Windows user/data security and management workload.

---

Thoughts and jots ... Updated 10-10-2013

I will be creating a proof of concept test environment - originally planned around ESXi v5.5 and Cendio's ThinLinc (free 10-license pack), now built on Windows Server 2012 and Hyper-V - to centralize and virtualize a Windows 7 client experience at any internet-connected PC, including HTML5-capable web browser access.

POC Deliverable:

  • "Anywhere" Windows 7 desktop access with environment and data access spawned based on user, role, department, and organization variables submitted securely.
  • More IT security with integrated data access rules and centralized profile controls
  • Less effort to maintain standalone Windows 7 desktop installations as they are converted to Thinlinc native Windows 7 ThinPC "terminals".
  • The same access to unique applications (Boardmaker, CSS Databases) that require CD (or floppy!) media to function, as well as flash media, made available at the station they are using to log in.
---

Observation 09-30-2013: Really, after slogging through the desktop upgrades of XP machines, I think I would like to make this the last time I have to do a bulk, by-hand operating system upgrade for this agency. I will find out quickly what admin overhead this could either increase or reduce.

I will explore: 
  • running the ThinLinc/RDP connections via native Win7 RD clients OR HTML5-enabled browsers on existing desktops
  • installing the ThinLinc "thin7" client on current Windows 7 installs
  • booting ThinLinc Client Operating System (TLCOS) Windows 7 ThinPC from a VHD on a few current Windows 7 machines and bridge the gap between OSes for awhile
  • building bare metal single OS (thin7) Here-To-There client "H2T" 
  • introducing non-MS desktop environments (Mint Linux?)
  • deploying TLCOS on Raspberry Pi hardware
  • deploying Linux OR Android clients on microPCs (MK802IV SE)
  • Alternatives to Active Directory that would play nice with Google Apps user management APIs
  • Alternatives to VNC for remote control of user session
    • 10-01: discovered how to use RD remote control, and where to change permissions on users to allow viewing, for Server 2008.
    • Also found IntelliAdmin's Remote Control product, which has the added benefit of being able to choose among logged on users for both Server 2008 AND WINDOWS 7!!! Much more stable than EchoVNC.
  • calling it Tootie (because the H is silent)
  • update 01.29.2014 - decided to rename this whole thing Ubiquity, well because it sounds cooler and has more meaning

The ThinLinc Windows Server 2012 Hyper-V server will be installed as a host OS on the ESXi host hardware (Intel i7, 32GB), as well as an instance of our current Win7Prox64 image (as the virtual desktop host guest). I am hoping to set up the virtual desktop host (VDH) guest instances as NON-DOMAIN clients if it can be done. Taking on the Active Directory bull while attempting to shift the desktop paradigm might be biting off the unmanageable.

Much braining to do.

---

10-10-2013 - Having spent lots of time looking at platform options for both client and server, it looks like the most straightforward and cost effective way to go is Windows 7 ThinPC (or a kiosk architecture such as ThinKiosk) running on existing desktops (no less management needed; Software Assurance makes the OS "already paid for"), connecting to virtual machines running the full Windows 7 desktop experience, with all the management features in place.

Friday, August 9, 2013

Managing Windows Imaging By Hand - Tales of Cloning


Recently I have been working on resolving The XP Conundrum for the agency before it's staring over my shoulder. This involves a lot of OS rebuilds and new machine deployment. I came across a vendor who offered a more automated, networked, remote-enabled method of managing these images and their application. Although it was, as expected, way out of our budget to even implement, let alone maintain, I had a chance to hash out my current build and image management approach.

I maintain 3 "gold" images: XP Pro 32-bit, Win 7 x86, and Win 7 x64. I have not used or updated the XP image in a long time, as any new machine I purchase now is quite capable of running the W7x64 OS. The W7x86 image is used on older machines being upgraded from XP at this point (upper end Pentium 4s with 1 GB of RAM). The product I use for applying those images is Acronis Backup & Recovery Workstation (v10) with Universal Restore. The Universal Restore feature allows the application of these images across disparate hardware vendors (I build all our machines from whatever DIY kits are on sale from Newegg). 

Generally, it takes about 20 minutes to apply an image, and another 15 to finish configuration at deployment time (not counting data transfer from old machines or printer setup). We don't have any configuration variance to speak of, which makes it less of a headache to manage from my end. Everyone uses pretty much the same software complement, and any special apps they need they install themselves (except for a couple machines for accounting and payroll).

On average, I am building or rebuilding one or two machines a month. With this push to retire XP, I am hitting a rate of about 5 machines a week til I'm done (about 70 machines at this point).

My situation is not especially involved in terms of machine configuration management, so doing builds "by hand" is not cumbersome. I am not running a domain, and the only unique configurations are in the form of staggered backup and app update schedules. Our only 2 enterprise server instances are 2 Windows Server 2008 machines; one for VPN and one for central backup repository. 

The burden of being the solo computer guy is balanced by the fact that I am in complete control of the process and decisions about how to execute a strategy. I do what I can to centralize my operations (using Batchpatch for centralized Windows Updates management, NiniteOne for core application updates, and psexec with batch scripts for everything else), but my decision to implement any tool or framework is driven by both budget and a desire to "keep it simple".
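To give a concrete (and entirely made-up) example of that psexec-plus-batch-scripts pattern, a maintenance run across a list of PCs looks roughly like this:

rem Run a maintenance batch from a share on every PC listed in machines.txt, as SYSTEM.
rem Machine list, share, and script names are placeholders, not my real paths.
psexec @\\server\it\machines.txt -s cmd /c "\\server\it\scripts\monthly-maintenance.bat"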

I have to measure the efficacy of implementing any new tool against the learning curve and effort to maintain any associated configurations and server components. I can't afford to dedicate too much time to any one piece of my job, because I am doing it all. You can get a sense of my scope if you visit my LinkedIn page here.

Saturday, December 1, 2012

The Quest For Hassle-Free VPN, Part 1: From There To Here

In the years since I started working for ARC, one of my seemingly unending quests has been to establish a virtual private networking infrastructure to manage the now-200+ PCs spread out over 11 sites. This has proved to be a major undertaking, given that all those sites have been on their own non-static-IP broadband services until earlier this year - not a total show-stopper with services such as DynDNS, but certainly added a potentially significant point of failure if one were to try and set up site-to-site VPN.

My quest for the perfect VPN option has had a few driving requirements:

  1. Someone else manages the server - Since I am a one person IT shop, I have always tried to steer clear of using infrastructure components that involved having to manage a server internally. That is a major reason I went with SAManage as my Incident/Asset management platform.
  2. No port forwarding needed - Again, if I had to open and manage ports on 11 routers, and change them every time I added or moved a client, it would be overly cumbersome. However, recently I have eased up on that for purposes of a few staff members using RDP to get to their work PCs from home.
  3. CHEAP - We're a non-profit. Ipso facto.
  4. Simple to set up and manage - Another derivative motivation based on the fact that I am doing this solo, I have tried to adhere to the KISS principle, in the event that someone else had to come in and take over if I were ever incapacitated.
  5. Works with RDP and Windows UNC - The initiative to set up a VPN is largely based on a need to establish an infrastructure management foundation. The two components I rely on most are remote access using VNC/RDP, and remote management using BatchPatch.
  6. Has an unobtrusive client presence (ideally runs as a service) - I don't want the system tray drawing attention to itself. Ideally, from the end user perspective, I want no evidence that there is a VPN connection on the PC at all. The fewer things people can click on, accidentally or otherwise, the better.
In my search for this ideal solution I have tried:
  1. Hamachi (the early years before LogMeIn bought it) - This was a great option for small networks, but at the time I was trying to steer clear of having to pay.
  2. Neorouter (April 2010 to February 2012) - This was a great service - the first one I thought was stable and robust enough to make it worth the time to install on every agency PC and manage with an internal server. And it was FREE! I used it for almost 2 years, and then they changed some aspect of the networking protocol that caused it to stop working with older versions of the client (which did not automatically update themselves or allow for a remote command line update). I also discovered during an attempt to move the server component that it was far from straightforward to migrate. There were several days of agonizing over how I would remediate this as I struggled to repair the service, but I finally resigned myself to abandoning it and looking for another option.
  3. Comodo Unite (formerly EasyVPN) (5 minutes in February 2012) - It was free, it worked much like Hamachi, and seemed easy enough to install. However, I became suspicious of the very fact that it was free, and my suspicions were validated when I ran into a snag and tried to get some support on it. Several emails, no response. An enterprise with a free product has no motivation to direct support resources on that product. Moving on.
  4. Hamachi (February to November, 2012) - Having become frustrated with dead ends, and knowing of LogMeIn's acquisition of Hamachi (but being wary of earlier price points for their services), I decided one day to again check it out. I was ecstatic to discover that their annual subscription for the VPN service had become very reasonable. For $120 a year, I could have a 255 client network. Not only that, but they had a few options for topology, client distribution, and several ways to manage those end points via a very convenient web interface. SOLD! For awhile... until the fateful week of November 19th, 2012, when LogMeIn did a wholesale change of their IP space from 5.x.x.x to 25.x.x.x and effectively knocked a large number of very pissed off customers offline. My own network did not fare as badly as some, but I decided that I wasn't going to continue relying on a vendor who conducted themselves so unprofessionally.
And so the search continues.

I have looked for a minute or two at... 
  • OpenVPN - prohibitive per-client licensing, complicated configuration
  • SecurityKISS - OpenVPN variant with bandwidth volume-based pricing, but open-ended potential for high bandwidth consumption
  • FreeLAN - prohibitively tedious SSL CA certificate distribution requirement
  • LAN Bridger - just didn't work as expected
None of them really met the criteria of easy management and cost effectiveness, and frankly at this point my faith in the stability or reliability of a vendor's infrastructure is seriously bruised.

Having exhausted my search, and having burned literally hundreds of hours researching, testing, deploying, and re-deploying clients, I came to the conclusion that it was time to revisit the option of rolling my own in-house VPN solution. I decided it was time to give Windows Server RRAS a go, in spite of the potentially steep learning curve involved.

And so begins my journey of blood, sweat, and tears... Coming Soon: Part 2 - Deploying an all-Microsoft VPN solution from scratch with no prior RRAS configuration or SSL deployment experience.

Wednesday, May 18, 2011

Perspective: Don't be a Nick (Burns)!

I thought before I launched into the geek speakiness and tech talkiness, I would do a psychospiritual level set and talk about one of the challenges I face as the "company computer guy".

One of my favorite recurring characters in the late 90s sketch line-up on SNL was Nick Burns (Your Company Computer Guy). Watching those sketches was hilarious, and painfully self-referential for me. See, there's a Nick Burns buried inside me, an archetypal character defect that I will probably never completely exorcise, no matter how many spiritual inventories I do. I would even venture to say that there's a little bit of Nick in all IT professionals, no matter how polished, humble, and self-effacing you might be. And, at the heart of that archetype is a lack of self-esteem that wields sarcasm as a blade to cut ourselves off from the potential of future rejections. When we dive into something with almost religious zeal, whether it's computers or video games or what have you, we become "geek". And when our geek gets deeply buried enough under arcane language, inside jokes, and minutia, we cut ourselves off from the general population. It can become a vicious cycle.

Wow, that went deeper than I anticipated. Sometimes I even surprise myself.

For the first 6 years(!) of my IT career, my geekitude both paid the bills and gave my Muse a medium, as I worked in the Helpdesk doing phone support in a retail environment (and in my spare time, pushed pixels with a passion). I could go on and on about the challenges and frustrations of trying to troubleshoot computer problems when you can't see the screen of the person you are helping (this was pre-TeamViewer or Remote Desktop days), don't know as much as you'd like about the system you're trying to fix (old-style dumb terminals and HP mini-computers), and the person you are trying to help is under the gun to help the customer glaring at them on the other end of the phone. And by nature, I am not a patient person. It's amazing to me, looking back, that I did not get written up on a weekly basis for my attitude.

When I moved into desktop support, it was a little easier to temper my frustration because I was at peoples' desks, seeing the problem first-hand, and was almost always rewarded with genuine gratitude. That fed my soul. The Nick in me still came out, but in a way that people tolerated as "just Jack", not mean-spirited but that gruff, sarcastic character that was often tempered with kindness and humility when nobody else was looking.

Occasionally, my sarcasm even caused a chuckle, as I would send folks a link to the Let Me Google That For You site when they had a question I thought was incredibly easy to "just google", which they could have done themselves. In fact, I became so famous for that directive within my wife's family that her sister gave me a shirt one Christmas that had "just google it" emblazoned on the front so I could point to it whenever they asked me something they knew I would tell them to google.

Since then, I've tried to incorporate the philosophy that "knowledge does not equal virtue" and daily remind myself it is unfair to get attitude with someone because they don't necessarily know or remember tech procedures that I think should be fundamental. Virtue should be measured by what and how much we give of ourselves to others. Period.

When I started working at ARC, this attitude of patience and empathy was an even more important "coat to put on" every morning. In the corporate world, it is somewhat reasonable to expect people to have basic computer skills because it's an integral part of their job. In the non-profit sector, people consciously choose to get involved in order to contribute to the greater good, not necessarily to make a living or further their careers. So the expectation of basic computer knowledge is not appropriate. These folks at ARC are here to make a difference in the lives of those we serve, and it's my job - my MISSION - to make sure that their experience with their computers doesn't become a distraction from THEIR mission.

As I continue to define and refine the direction that I'd like to guide the development of the IT infrastructure here, I think it's central to what I do to remember what's important to the people I'm helping. I guess, in that light, calling this an IT "empire" is rather... Nick-ish of me. So from now on, I'm going to call it OUR IT Collective.

Cheers, and have a great day!

Tuesday, May 17, 2011

Building an IT "Empire" - Part 1: Managing those desktops!

When I first came on board here at Contra Costa ARC, there was no IT infrastructure to speak of. It was a mishmash of unconnected, unmanaged bits and pieces with little cohesion or consistency.

All the desktop computers were deployed without modification, either directly from Dell (complete with crapware galore), inherited from the county, or built by a system integrator and then equipped with Office 97 and all subsequent patches afterwards. The lone agency server (hosting the accounting and payroll systems) was running NT Server on a Pentium 3 500MHz processor... with DAT tape backup (never really was sure if any of those tapes had uncorrupted data on them). The file server was an XP machine with a shared out folder. Email services were a conglomeration of personal accounts and a domain poorly and expensively hosted by Earthlink. It wasn't pretty, but it worked for them... mostly.

I can imagine that this scenario is probably more common than not for non-profits who cannot afford full-time IT staff. When I finally had a complete picture of the task before me, it was more than a little overwhelming, but I enjoy a challenge.

So where to begin? Well, I realized I needed to overhaul the desktop management process first, since that's the core component of the infrastructure here. I have since set about identifying and refining my toolset for this aspect of my job, and it's a big piece of the puzzle. My next several posts will catalog the details of my processes and the tools I use to manage the PCs here at CCARC. Below is a list of topics I will cover:

  • Setting minimum hardware specs, OS requirements and creating a software "package"
  • Purchasing PCs and software
    • Using TechSoup.org and the Microsoft Donation Program: Don't pay full price for that OS!!!
    • Dell: A consistent source of quality desktops and laptops for under $500
  • Managing workstation images
    • That first build: patched and up to date in one afternoon with Autopatcher!
    • Minimize driver hunting with Driverpacks.net
    • Batch software installs with ninite.com
    • Enable remote support with Echoware and a Remote Desktop tweak
    • Final custom touches with Group Policy Editor
    • Wrap it all up with Acronis True Image Workstation and Universal Restore
Before I had this process nailed down, it would take me a minimum of a day and a half to build a PC with Windows XP Professional SP2 from scratch, taking into account all the updates and patches that had to be downloaded. I didn't have a lot of experience with imaging PCs but I knew there were ways to streamline the process.

These days it takes me less than an hour to go from blank hard drive to deployable, managed XP or Win7 Pro desktop. Considering that I am building about 10 PCs a month, that's 11 times faster than it would be without these techniques. When you're running the show solo, that kind of time savings is critical.
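The arithmetic behind that claim, assuming an 8-hour workday:

1.5 days ≈ 12 hours per build the old way, versus under 1 hour now - roughly 12 times as fast, or 11 times faster.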

In my next post, I will talk about how I decided on minimum desktop hardware specs, what OS to use, and which software I deploy on agency PCs.