Articles and Blog
December 14, 2014
There is an underlying assumption in most technology models that every strategy will be possible: if a given technology exists, it exists in all places and for all users, and can therefore be deployed. In the face of such ubiquitous technological opportunity, any technology is available for planning and implementation, and minimum standards of efficiency and even regulation become possible. However, a plan which requires this universality of technology fails the first time the assumption proves unattainable.
A couple of examples should suffice. One user once subscribed to Amazon Web Services. For about 18 hours altogether. They went through the fantastic smorgasbord of cloud options Amazon offered, signing up for selected features and envisioning welding them together into an amazingly powerful new model of computing power and off-site backup. Then reality kicked in. The regional internet bandwidth, while technically high speed, was not sufficient to support the continued robust connectivity AWS requires, nor was such a quality of service available (although it was certainly advertised). Scratch one technology model, compliments of the real world.
The same is, or may be, true of other technology models. Cell carriers with spotty coverage should not ethically (although they may in reality) offer a device or service, or aspects of a service, which assume constant or nearly constant connectivity. Chromebook performance was probably most impressive when tested in southern California, where you don't really need sunlight to get a tan; the wifi and cellular signal strength alone ought to be enough to cook anyone medium rare. But it may not be the best place to test a technology which relies unequivocally on connectivity as available and abundant as that of the test conditions.
This is especially true of mobile devices. By definition mobile devices are expected to move from place to place. Pick a U.S. mobile carrier at random, go to their website and browse their coverage map. Really zoom in and pan around. Think about how many contiguous miles are covered only by spotty coverage. Anyone doing business in such an area, or traversing it on a regular basis, cannot partake of the theoretical technology models, no matter how advanced the models or how impressive the advertising.
For a while I traveled between Rockford, Illinois and Dixon, Illinois on a fairly regular basis. Given where the roads physically run, the regionally available carriers and the signal strength, that stretch is essentially a cell phone dead zone. Between one city and the other there was no cellular signal at all.
As when traveling through the desert, make sure that your car is in good shape and the spare tire has air, because there is no help, nor any way of calling for it, for the next 50 by 40 mile block. GPS works only if you have a map program which preloads its map data; if it relies on the cell network for data, that feature does not work either. Nor does 911, AAA, or calling your boss if you are running late. The model fails.
Soon this will be true of cars as well. One aspect of the Internet of Things (IoT) that technology writers love to tout is the connected car. Remember the Google self driving cars? They look very cool on the websites; all of the technicians standing around them in matching polo shirts, clipboards in hand, kind of brings a tear to the eye as a dream is realized and civilization takes that next leap forward. In practice the smart car probably won't be so smart after all whenever it drives out of coverage range. Will these smart cars, now dumb cars, be sold where there is not the infrastructure to service them? Absolutely. Should they be, in a properly ethical environment? Probably not, at least not without a lot of disclosure.
In fact, what will happen is this: smart cars will be sold where there is not a chance in the world that the infrastructure exists to let them be smart, and the companies doing the selling will hide behind what may be called the helpless peon syndrome. To wit: companies which cannot service their products will staff the front lines of customer service with people neither empowered nor possessing sufficient technical knowledge to address customer complaints. (Nor, in fact, is technical education actually relevant in a scenario in which the infrastructure simply doesn't exist to provide the promised service.)
There was a televised news segment from the American South not too long ago. People who had businesses in the small town in question had what could charitably be called spotty internet service. The individual merchants had come up with a variety of workarounds, even as they were all but crying with frustration that the only high speed provider was completely indifferent to the quality of service issues they experienced. And the frustration was deserved: customers were turning away, and actual, measurable business was lost.
So the merchants had a calling network whereby whoever discovered that the internet was back up would call the others; they had pre-printed signs they periodically hung in their windows saying that they could not process card payments for the time being; they had the wiring strung up beside the cash register so they could lean over, disconnect their business phone(!) and plug in the card payment line. Into this brave new world the internet provider did not dash to fix the problem. Instead customers got empathy statements from unempowered peons in a deficient coverage model.
All of this is just to observe that sometimes, more often than may be thought, technology models are encumbered by lack of infrastructure, human nature, greed and indifference. These qualities don't appear anywhere on a Gantt chart when a system model is envisioned, but perhaps they need to have a place and a value, even as an intangible. Call it the anti-goodwill.
Banging the Rocks Together: A Life Skill for when the Internet fails
November 14, 2014
“Broadcasting around the galaxy, around the clock...we'll be saying a big hello to all intelligent life forms everywhere...and to everyone else out there, the secret is to bang the rocks together, guys.” -- Hitchhiker's Guide to the Galaxy
The Israeli Homeland Security (IHLS) website addresses the security (or lack thereof) of the Internet of Things in an article dated November 12, 2014. The article correctly notes that all of the many current and future components of modern life which send information to and receive information from the Internet are vulnerable to attack. IHLS also observes (correctly) a paradox: systems must be simple enough to secure, but require complexity for current and future applications in the Internet of Things.
The problem is that this very paradox needs to be addressed realistically. IHLS insists that components critical to infrastructure be “completely clean, uncontaminated” yet flexible enough to meet current and future demands. This sounds rather like a middle manager banging his fist on his desk and yelling to just do something, without understanding the system realities. It sounds like Dilbert. It probably looks great on a planning report, though. Let the legislators talk about a system which is secure and uncontaminated and flexible; they don't know what a realistic design parameter is anyway.
The IHLS theoretical system has the following specifications. It is
- flexible and upgrade capable (that is, modular)
- minimalist (that is, simple enough to keep clean and protected)
- and let's add singular (that is, there is only one clean uncontaminated attack vector to defend)
What you have effectively designed is a system the successful attack of which will bring down an entire swath of infrastructure. Further, by limiting the attack vectors in such a system, you have virtually guaranteed that those limited vectors will be researched exhaustively by attackers. In information security (infosec) there is a truism that defense is always playing behind offense; in other words, attackers always have the initiative, and defense is always reactive. Putting all of your eggs in one basket, all of your faith in one component of a system, and a system with unrealistic requirements in the first place, virtually guarantees an eventual successful attack on infrastructure.
The better answer is dynamic redundancy: multiple and varied components protecting each critical infrastructure system, and an infosec team to maintain them against the inevitable attacks. Then, when an inevitable attack impacts one part of the system, there are redundancies to maintain infrastructure while the effects of the attack are repaired. Redundancy should not be confused with minimalist design parameters: minimal system components are more desirable than complexity when the same or similar benefit results, and that does not conflict with the concept of redundancy.
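To make "dynamic redundancy" a little more concrete, here is a minimal sketch of the watchdog idea. This is my illustration, not anything IHLS proposes; the hostnames, ports and two second timeout are placeholders:

import socket

# Illustrative only: two dissimilar, redundant control channels for one
# infrastructure system. Varied components mean one exploit rarely fits both.
CHANNELS = [("primary.scada.example.net", 502),
            ("backup.scada.example.net", 4840)]

def first_healthy_channel():
    # Return the first channel that accepts a connection. A real health
    # check, and a real failover policy, would be far richer than this.
    for host, port in CHANNELS:
        try:
            probe = socket.create_connection((host, port), timeout=2)
            probe.close()
            return (host, port)
        except socket.error:
            continue # this component is down or under attack; try the next
    return None # all redundancy exhausted: page the infosec team

The point is not the code but its shape: when one component is knocked out, service continues on another while humans repair the damage.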
Unfortunately, such a system will probably not happen, for a couple of reasons. First, non-technical people (including legislators) do not really want to hear that threats to a system are ongoing and will continue into the indefinite future. They want to hear that a problem is resolved, not that it never can be; by contrast, the IHLS proposal sounds sexier.
Second, the cost of redundancy is not as easy to explain when the redundancies are guaranteeing a system rather than actually being responsible for its real time operation. Non-technical people (including legislators) only truly appreciate that a redundancy is necessary when it's not there.
Non-technical people (including legislators) do not want to hear about the details. They want the present and future benefits of systems, and to lay out their requirements to systems designers without understanding that those requirements are unrealistic, in some cases bordering on fantasy. Non-technical leadership may not want to hear the details, but the devil is in the details.
So I was outside for a while today banging some rocks together in practice for the apocalypse this sort of thinking inevitably portends for a society reliant on Internet based infrastructure. It seemed more useful than banging a fist on a desk and shouting for an unrealistic infosec model.
Lachman, Dov. "Protecting Internet of Things from malicious attacks." Israel's Homeland Security Home. November 12, 2014. http://i-hls.com/2014/11/protecting-iot-malicious-attacks/
Why a Browser Blacklist?
November 11, 2014
I have a browser extension for Firefox and its full service big brother SeaMonkey which permits me to block certain URLs or domains. Some reasons that people use browser blacklists are to block
a) pornography or other “objectionable materials”,
b) phishing or other sites with bad security reputations, or
c) sites which interfere with productivity, such as kitten videos or online games.
These are fine reasons to block sites, and I understand them. However, I did not begin using a browser blacklist for any of the above reasons.
I began using a blacklist because of the advertising and statistics servers which all too often hang my browser. Web sites track their popularity, determine advertising rates and use geolocation services to serve 'locally relevant advertising'. Yet a news site's specialty is news, and entertainment sites hope to entertain; neither is expert at serving 'relevant advertising' or generating the statistics they crave. As a result they often use outside services to collect this data and serve advertising for them. It can be annoying, and I won't say that I like it, but I do understand the concept of advertising based revenue.
However, a line is crossed when these sites a) use advertising or statistics services which are so slow to respond that the browser hangs for a notable period of time, and b) so poorly craft their sites that the page hangs until the remote advertising or statistics server responds, however long that may be. Further, these third party advertising and statistics services do not serve just a single site; they provide multiple sites with their services. In theory they should have enough server capacity and bandwidth to provide this function in real time to all of their client sites, so that every client site loads seamlessly; in practice that does not always appear to be so.
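The mechanics are mundane. Consider a hypothetical page built like the snippet below (adserver.example.com stands in for any slow third party): a plain script tag near the top of the page halts rendering until the remote server responds or times out.

<head>
  <!-- Synchronous: the browser stops here until tracker.js arrives -->
  <script src="http://adserver.example.com/tracker.js"></script>
</head>
<body>
  <p>Actual content, which nobody sees until the ad server answers.</p>
</body>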
In response, I use the following model to determine whether an advertising or statistics or 'other' domain makes it into my blacklist.
- I do not blacklist such a service simply 'because I can' block advertisers or data miners. Life is too short for that.
- I blacklist such a service when it slows down a web site enough to get my attention, AND
- the 'hang time' is long enough for me to become annoyed, bring up an electronic sticky note, and note the domain.
If these last two elements are true, I feel no more guilt about dropping them into my blacklist than a site owner, advertiser or data miner feels about hanging my browser.
I am currently testing Silent Block 1.2.3 for SeaMonkey and Firefox, and it seems to make a notable difference in browser speed. I have not used it long enough to make a meaningful overall assessment of the extension, but it does seem comprehensive and flexible.
As of this writing, domains which have slowed or hung my browser long enough for me to comfortably note them without hurrying and are therefore (in my opinion) worthy of a place in my blacklist are:
Your mileage may vary. Also worth noting is that some third party domains serve actual content, albeit with agonizing slowness, and may in fact provide elements of a client site which you want to see. Thus a site may load with errors, load incompletely, or appear incorrectly formatted if you block third party domains which provide such content. A manual blacklist may be a useful tool, but which domains to add to it is a matter of trial and error. A Google search for a domain is often enough to indicate whether it is a data miner, an advertiser or an actual content provider. In the end, a third party domain has to really slow me down (in my opinion, so this is entirely subjective), and probably more than once, before I bother to blacklist it.
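Incidentally, for those who prefer a browser independent approach, the hosts file achieves a similar effect at the operating system level. The domains below are placeholders, not entries from my own list:

# /etc/hosts (or C:\Windows\System32\drivers\etc\hosts on Windows)
# Mapping a domain to 0.0.0.0 makes its lookups fail fast in every browser.
0.0.0.0 slow-adserver.example.com
0.0.0.0 statistics.example.net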
On the Butlerian Jihad
November 9, 2014
“Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.” -Dune, 1965
This is an interesting perspective on a couple of counts. Dune was a novel from 1965; computers as we know them today did not exist. Despite the lack of modern computers, it was assumed that men would abuse their thinking machines to the detriment of other men. Although Dune does not provide a lot of detail on what the novel calls the Butlerian Jihad, it is presumed that the author anticipated some sort of social backlash against this abusive control by computers.
The year 1965 was before the personal computer, before Bill Gates said that 640K ought to be enough for anyone, before the rise and fall of the Blackberry, before Google stated that anyone with anything they wished to keep private ought not to be doing that thing. It was before the birth, short life and quiet death of the concept of opt-out, on both the commercial and governmental levels. 2001: A Space Odyssey was still a couple of years away, and IBM's Watson, quick at data regurgitation but strangely limited where relationships on multiple levels are concerned, was still 50 years in the future. Despite its time, Dune was prescient about where computers would eventually go, as directed by the worst nature of their human operators.
One concept which Dune suggests is that computers will be used to abuse others. Multiple examples are apparent in the information systems of today. Governments now analyze all data generated by their own citizens, the better to top up their dossiers. Spyware and viruses steal information from computers through stealth. Corporations collude to create a social atmosphere for information systems in which all user privacy is compromised and all user data is made available for use or abuse by any group for any purpose regardless of vendor.
Another concept that is suggested even in the limited writing in Dune is that a backlash against the overreach and control by computer systems will come to be necessary. This suggests a concept which has proven true throughout history: that given the option to continually develop an abusive system or practice to one's own benefit, even to the detriment of others, such development will continue to a crisis point. It is ultimately a social or political restriction rather than a logical or technological constraint which eventually limits or adjusts the concept being developed.
There are secondary, but no less valid, elements of such a paradigm. This is precisely the sort of relationship a Watson would miss, and it may not be completely understood by either data vendors or users. When a model becomes mandatory or quasi-mandatory, that is an indicator of several things. First, no matter how such a paradigm is portrayed, if it is imposed unilaterally by a vendor or government, it is probably not desired by those on whom it is imposed. Second, if the paradigm is applied equally or mostly equally among all vendors, or by one's government, there is not really an opt-out option to be had. Third, the suggestion that the only way to avoid such an abusive paradigm altogether is to not use technology is disingenuous at best, cynical deception at worst. Taken together, these elements mean that to use technology one must tacitly accept abuse.
Another interesting element, again, historically demonstrated, is that when a situation reaches a crisis point, the remedy is destructive of both the elements which caused the paradigm to become a crisis and also the underlying structure which would have survived had the paradigm not been pushed too far. A peripheral, but again no less valid element, is that although this historical reality is apparent in hindsight, in the present it is difficult (but not impossible) for the participants to say for certain when things have been pushed too far, and a crisis is imminent.
From World War II until the mid-1970s it was permissible to smoke anywhere. People could and did smoke inside hospital rooms, and Big Tobacco was one of the bluest of blue chip stocks. Times change and paradigms shift, and by the very nature of the concept the outcome is unpredictable. Today, in certain cities you cannot smoke out of doors in many places, while cars still drive along spitting out orders of magnitude more pollutants than any smoker ever could. All of which is to say that a paradigm shift is not predictable in its details, or, as Michael Crichton wrote, a paradigm shift is like death: you cannot see how it will work out until you are on the other side.
Certain behaviors and reactions are currently apparent. Corporations hide behind legal theory and lawyers rather than address the fundamental issues which cause customer dissatisfaction with their practices. This indicates awareness of the problem, disinclination to address same and suggests that further development of the same model will continue in a similar direction. Likewise, governments hide behind national security arguments, and like corporations, ignore the underlying concerns while the model develops further.
Ignoring the real underlying concerns of users, a willingness to test the limits of current models, the assumption that the status quo of generalized abuse will evolve and continue indefinitely, and ignorance of history, whether intentional or otherwise, will precipitate a crisis in the information age. If history is any example, the pattern will continue and be pushed beyond the brink until the crisis unfolds. After that crisis, there will be no going back to even a portion of the model which is rejected. Similarly, if history is any example, it will be impossible to make most people in any given present believe this until a crisis is inevitable.
November 14, 2014
I wrote on the Butlerian Jihad a day before U.S. Senator Ted Cruz tweeted on Net Neutrality in terms which can, most charitably, be read as amazingly uninformed about what Net Neutrality actually is. The best response to Senator Cruz, and the best summary explanation of Net Neutrality, I have seen comes courtesy of The Oatmeal. See the Senator's tweet and The Oatmeal's response immortalized online (warning: the language is PG-13, if that offends you).
Net Neutrality, in summary, is a good thing. An Internet without it is uncomfortable to contemplate. The Internet would not collapse without it, and information would still be available; it would just be more difficult to get balanced news, open source software and reasonable media choices. Even in the current environment, in which Net Neutrality can be said to exist, the video of Obamacare's economist calling American voters stupid took several days to make it to center and left of center news media; open source software is normally donor funded and can't compete financially with a Microsoft, Apple or Google; and Comcast has already shown, with Netflix, how choice of media could be restricted and prices raised arbitrarily.
Users would work around a lack of Net Neutrality, some more effectively than others, but most of them would definitely be unhappy about the new, skewed Internet. I am torn about the reality of an internet sans neutrality, and what it means for the Information Age in the long term. On the one hand, I am selfish; I want my balanced news, open source software, and media choices.
On the other hand, the current cyber environment has many problems, of which Net Neutrality is but one. Even if Net Neutrality becomes the regulation of the land, there remain crucial concerns which the debate over Net Neutrality does not address: corporate concepts of individual data privacy, national security, ever evolving cybercrime. None of these issues would be addressed by regulation in favor of Net Neutrality.
As I said above, humans historically have a tendency, in fact can almost be guaranteed, to push situations too far when things are going their way, until a crisis point is reached. There is no reason to expect that an Internet without Net Neutrality would be any different. If Net Neutrality is defeated, one can expect higher prices, less choice, and countless models built to monetize the fact that users can be made to pay more for certain types of content or content from specific vendors. This will in turn produce a vast unhappy user base, lawsuits, uncertainty, and companies paying lip service to consumers but little else. And that in turn might push the inevitable cyber crisis that much closer.
And that may be more beneficial in the long run than Net Neutrality.
The Oatmeal. "Dear Senator Ted Cruz, I'm going to explain to you how Net Neutrality ACTUALLY works." November 10, 2014. http://theoatmeal.com/blog/net_neutrality
Google and Chrome, Linux and Chromium, Firefox and Flash Player
October 31, 2014
Many Adobe Flash based videos and games no longer operate properly in the Firefox browser for Linux. This is due to Adobe's decision to stop supporting the Linux operating system with a direct download browser plugin for Adobe Flash Player. Instead, Adobe provides a Flash plugin built on the Pepper plugin API (PPAPI) and makes it available only in the Google Chrome browser.
However, there is a problem with this approach, and that problem is Google. As many users have noted, Google, for some inexplicable reason, decided not to support CentOS/Red Hat/Scientific Linux with the recent versions of the Chrome browser. In itself this is not a problem, since Linux offers the Chromium browser for the Chrome fans out there, and no doubt the Linux community will eventually develop a Flash plugin of its own for all browsers. For the time being, however, the problems a Linux user must resolve to have a browser with updated Flash capability are these:
- Adobe does not offer a recently updated Flash player browser plugin for Linux, except as packaged in Google Chrome,
- Google has snubbed or ignored several of the major Linux distributions in the latest version of Chrome,
- Google does not currently offer previous versions of Chrome for download.
Leaving aside the privacy issues inherent in running a Google based browser, the reality is that some people may want their Flash based games, or the ability to view all Flash based content, so badly that they are willing to essentially waive their online privacy and use Google Chrome in order to have Flash capability. I have my doubts about the advisability of this course of action; however, for those users desperate for their Flash content, here are some simple steps to get the Pepper Flash plugin from Chrome installed into Chromium. (I installed Chromium and the Pepper Flash plugin in CentOS 6, 32-bit edition.)
First download and install the Chromium browser. If it is not available in your distribution natively, you can get it at http://people.centos.org/hughesjr/chromium/6/
Next download and save (do NOT install) the latest Google Chrome RPM installer available at http://www.google.com/chrome/
Now open the Google Chrome installer RPM with an archive manager. In other words, do not run the installer with Yum or Package Manager; instead open the RPM to browse its contents.
Next extract the folder /./opt/google/chrome/PepperFlash/ from the Google Chrome installer. It is generally a good idea to keep the folder name for clarity. So, you may save the extracted folder and contents as ~/PepperFlash/ or similar. If things went properly, you now have a folder called ~/PepperFlash/ or similar containing a file called libpepflashplayer.so. You can now close the Google Chrome installer RPM and delete it.
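If you would rather do the extraction from a terminal than an archive manager, rpm2cpio should accomplish the same thing. The installer filename below is an assumption; use whatever name your download actually has:

rpm2cpio google-chrome-stable_current_i386.rpm | cpio -idv "./opt/google/chrome/PepperFlash/*"
cp -r ./opt/google/chrome/PepperFlash ~/PepperFlash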
When you installed Chromium, Linux created a launcher shortcut. That shortcut launches Chromium with the command
/usr/bin/chromium-browser %U
Using our example, change that shortcut to read
/usr/bin/chromium-browser --ppapi-flash-path=/home/user/PepperFlash/libpepflashplayer.so %U
(Spell out the absolute path to your home directory; the ~ shorthand is not reliably expanded in launcher shortcuts.)
Restart Chromium, and your Flash based content, including games and videos, should now be available. (If in doubt, the chrome://plugins page should list the Flash plugin at the path you supplied.)
That's it, you're done.
September 29, 2014
Information systems originated the concept of garbage in, garbage out (GIGO), meaning that at the design phase of a computer system, proper attention to the accuracy of information as well as to the programming logic was necessary. This was not as obvious as it would seem on the surface, but it was nonetheless unavoidable. The cleanliness of the programming logic was not in itself useful if the assumptions made about the data were inaccurate; similarly, if the processing of fundamentally accurate data was incorrectly weighted by the programming code, the quality of the resulting information was suspect. Therefore neither the input data nor the processing assumptions could be incorrect, and to the degree that they were (garbage in), the results were assumed to be flawed (garbage out).
But the concept of GIGO is in itself limited, and perhaps limited in a crucial area. GIGO makes the assumption that there is an interface singularity; an input phase; a stage at which an information system is tested as accurate with regard to data and processing assumptions, after which, garbage in having been protected against, garbage out will not occur. Information systems project managers know, on the other hand, that it is necessary to update a system more or less constantly, and in fact as soon as one cycle of systems development ends the efficient long term project essentially begins again. However, this is a long term development cycle. It fundamentally conflicts with a culture of the 140 character tweet, the 160 character text message, and the concept of immediate gratification.
This distinction is especially telling when one is attempting to understand and predict human behavior. Predicting human behavior is in fact more like predicting the weather than performing a straightforward, complete analysis. At one time it was assumed that, given sufficient computing power to assess the variables, long range accurate weather prediction was possible. In fact the variables were so many, and so incompletely understood in both scope and impact, that weather prediction on the scale anticipated ultimately failed.
It may be theorized that as human intelligence deteriorates in the face of a culture where a complete communication is contained in 140 or 160 characters, prediction of human thought will become more possible and more precise. In fact, with fewer variables (less intelligence on the part of the subject, or less ability to focus on minutiae), prediction will likely become more probable. But the standard of 'probable' makes predicting human behavior ultimately no more accurate than long range weather prediction.
In addition, as with weather prediction, once one improperly quantified variable deviates from the prediction, all data based on that variable becomes inaccurate to some degree; further analysis yields not only increasingly inaccurate results but also further inaccurate input, and the model inevitably skews to the point that it bears no real resemblance to actual results. In other words, as garbage in becomes an inevitability, garbage out becomes equally inevitable.
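A toy calculation makes the point. The logistic map below is a stock example of a chaotic system, not anything drawn from MIS or CRM practice; two inputs differing by one part in a million track each other briefly, then diverge completely:

# Python: two nearly identical inputs processed by the same simple rule.
r = 3.9 # the logistic map is chaotic at this parameter value
x, y = 0.500000, 0.500001 # 'accurate' input, and input with a tiny error
for step in range(1, 31):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print step, round(x, 4), round(y, 4), round(abs(x - y), 4)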
Having said as much, it must also be noted that complete, accurate predictability of either weather or human behavior may be seen as a philosophical aspiration, but that this unattainable aspiration does not render the quasi-accurate prediction meaningless. Even though weather cannot be predicted accurately into the indefinite future, and many predictions are grossly wrong, a weather forecast is still a generally useful tool, in context and with full regard to its limitations. Possibly, and in fact probably, MIS or CRM systems which attempt to divine human behavior, motivations and reactions are doomed to hit the same point of inevitable deviation. Such models may be assumed to have the same conceptual degree of accuracy or inaccuracy, value and limits as a weather forecast. Similarly, such models may be seen to be generally useful, but neither all knowing nor completely reliable, subject to the occasional gross inaccuracy, and requiring constant reassessment.
Therefore, as with weather prediction, listen to the forecast; but like the old timer whose knee twinges when it's going to rain, the twinge may be no less useful a predictor. Management instinct may thus challenge the best packaged MIS or CRM systems in terms of predictive ability.
To Kill a Mockingbird, Once and Only Once
September 19, 2014
Question: How is a rock and roll song like a great novel?
Answer: When it's a one hit wonder, it's still a hit.
Harper Lee, Bram Stoker, Mary Shelley, Margaret Mitchell. One hit wonders, all. That one time, that one magical time, they got all of the way under the ball and hit it out onto Ashland Avenue. But, when you manage, through brilliance, skill, luck, the beneficence of God or the universe or the Great Spirit, or what you will, to get not a piece of it, or a slice of it, but to get all of the way under the ball that one glorious time and to smack it completely out of the park, what you do not do, what you must not do, is to run out onto Ashland Avenue and try to hit the ball a little further. It's out of the park. It's gone. Na, na, hey, hey, kiss it goodbye.
Harper Lee rarely spoke of Mockingbird. True, she wrote to editors regarding the proposed censorship of Mockingbird by small minded school districts of her time. But her commentary on Mockingbird itself was limited, mainly consisting of the observation that the story was now told, that there was no more of that story to tell, and that any further attempt along that line would be an inferior rehash. In other words, na, na, hey, hey, kiss it goodbye.
It is surprisingly difficult for me to write on this topic, although I feel so strongly about it, simply because I understand the concept so intuitively and completely. It is, to me, so obvious a point as to be pointless to belabor it. It should not need to be said. To Kill a Mockingbird, Dracula, Frankenstein, Gone with the Wind. Their stories were told. They were not told well, they were told surpassing well, they were told superbly. So, na, na, hey, hey, kiss it goodbye.
In an age of sequels, prequels, and we-cannot-think-up-new-ideas-so-how-about-a-rehash-quels, in an age where we do remakes of existing stories rather than demand creative and original content, in an age in which some movie studio genius decides that three or five sequel movies maximizes ROI (and is right in that assessment!), I cannot help but appreciate someone who knows how simply to STOP telling a story when it is finished. To borrow from Pat Conroy, these stories have entered the bright and dazzling city of memory.
In that bright and dazzling city of memory, they will dwell, and there I will visit them occasionally. When I visit them there, they will bring me joy all over again. But their stories are told. Their stories are complete. If those stories expand over time, it is not the stories which have changed, it is I who have changed, and can more fully appreciate their tale.
So to Harper Lee, Bram Stoker, Mary Shelley, Margaret Mitchell, and all of the other one hit wonders who told a tale which changed me, thank you. If that one time was all that you had in you, what does that matter? That one time was enough. Na, na, hey, hey, kiss it goodbye.
May 12, 2014
Reading a news item on California's proposed mandatory kill switch for stolen mobile phones, one link led to another and I ended up at The Wireless Association website, more commonly known as CTIA. Now, CTIA's site has a lot of good advice on securing your phone. I'm a big fan of password protecting phones, backing up the data, encryption and the like. Those are all good practices, and people should apply them.
CTIA describes itself as “an international nonprofit membership organization that has represented the wireless communications industry since 1984. Membership in the association includes wireless carriers and their suppliers, as well as providers and manufacturers of wireless data services and products.”  In other words, this is a group which represents the mobile industry, which is in no way the same thing as representing consumers.
CTIA is generally opposed to a universal, irreversible kill switch for mobile devices. Their argument goes that a hacker could disable multiple phones with specially crafted SMS or other attacks, leading to the mobile equivalent of a DDoS attack. In the case of this single scenario, this one approach to mobile phone theft, they are correct. Such a kill switch could, and most certainly would, be abused. And to hackers I would add abusive spouses, stalkers and other miscellaneous debased persons who would no doubt abuse such technology on an individualized basis.
In response to such a kill switch, CTIA suggests a kill switch app, which would give the consumer a reversible ability to prevent their phone from being used on a mobile network. This sounds like a decent compromise on the surface, but it has some problems if it is the only mechanism offered to address the problem. First, it applies to mobile devices; by definition, these devices move from place to place with their owners. Yet consumers who would invoke their kill switch app in the event of the theft or loss of a device must have the internet available to invoke it, which is problematic since their immediate connection to the internet has just been lost or stolen (and some consumers cannot afford to maintain a second way to get online at all). Additionally, a kill switch app which is reversible suffers the same danger of becoming a tool of hacking and harassment as the irreversible version: rogue SMS, abusive spouses, stalkers and the like could still use it effectively.
Where I differ from the CTIA's perspective is in the available options. CTIA seems to suggest that there are three major options: consumers using best practices (a great idea), a universal, irreversible kill switch (which is problematic), or a kill switch app (equally problematic). From the perspective of a group which represents the mobile industry, this may be reasonable. After all, what these practices all have in common is this one simple element: they require almost no cooperation on the part of mobile providers. The effective limit of mobile providers' responsibility is essentially to request that mobile device manufacturers include a specific app in the pre-installed software they load onto their devices. That's about it.
A reality which the CTIA's limited viewpoint ignores is this: mobile providers have, for the most part, been able to track the multiple serial numbers of a phone which accesses their services for years.
Suppose that you were to call your mobile carrier and report your phone stolen, and even to contest the cost of international calls made on that phone during the period when you thought your phone was lost and not actually being used by a thief. The mobile provider will tell you that you are responsible for all charges until the time that you reported the phone stolen, and that they, the mobile provider, can prove the validity of the charges specifically because, if push comes to shove, they can document that a specific handset or handset-and-SIM-card combination made the calls and incurred the disputed charges.
The mobile provider can document these charges because they track the various serial numbers of mobile equipment making calls on their network. So the mobile provider can and will tell you that your handset, identified by serial number (called an IMEI or MEID depending on the technology), and/or your SIM card (again, technology dependent, not all U. S. mobiles use SIM cards) made the contested calls. In most cases that information exists on the providers' records.
I say that the information exists in the providers' records in 'most cases' because by their nature mobile phones move about, roam on partners' networks, and even travel out of the country. There are different levels of age, infrastructure, investment and compatibility among these various networks, and some records will not have all device information documented completely or compatibly.
For these reasons, an industry database to block reported stolen devices would not be a perfect system. Stolen phones are sometimes resold in other countries. There are even knock-off copies of major brand phones from cheap manufacturers which do not have an industry standard serial number programmed into them. So there are cases in which a stolen phone may be used and slip through the cracks of an imperfect system created and maintained by mobile providers. Nobody claims perfection for such a system, and any such gaps would be both limited and understandable. More to the point, if the average thief or opportunist knows that a lost or stolen phone cannot be reactivated short of a lot of luck, technological expertise or the ability to resell a stolen device overseas, the incidence of mobile theft would plummet.
A reversible kill switch app designed to disable a stolen device assumes that the lost or stolen device has not been wiped or reprogrammed by the thief or purchaser. Software is ultimately changeable, but a hard coded serial number is much less likely to be changed and is therefore a far more secure tool for device identification. Additionally, leaving the identification of the device in the hands of the people better able to manage the minutiae of mobile technology (the providers) is more effective than expecting consumers of varying levels of technological sophistication to disable a phone effectively.
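As an aside on why a hard coded serial number is such a sturdy handle: a standards conformant IMEI is fifteen digits ending in a Luhn check digit, so casually faked or mistyped numbers tend to fail validation. A quick sketch (the sample number is a commonly published test IMEI, not a real handset):

def luhn_valid(imei):
    # Standard Luhn check: from the right, double every second digit,
    # sum the digit values of the results, and require a multiple of 10.
    digits = [int(d) for d in imei if d.isdigit()]
    if len(digits) != 15:
        return False
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print luhn_valid("490154203237518") # a published sample IMEI -> True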
Looking at the various options potentially available: while a reversible kill switch app is, or can be at the consumer's discretion, a valuable addition to a mobile phone, the most effective common point of control is the one which incorporates both the information and minimum standards of expertise. The mobile phone providers alone have the information and the access to create, maintain, and effectively use an equipment serial number database, still the most effective means to block a lost or stolen mobile device.
Now all that is really needed is for mobile providers to step up and be responsible.
CTIA. "About Us." Retrieved May 12, 2014. http://www.ctia.org/about-us
HOWTO: Automate temperature monitoring in CentOS Linux (a/k/a Build your own Stuxnet Day)
April 29, 2014
Part I - Argument
This last April 25th was the day that I built my own Stuxnet and burned out a power supply. Stuxnet was a worm which in effect caused the hardware (centrifuges) used in the Iranian nuclear program to run so fast or so irregularly that the centrifuges burned out. This was said to be directly responsible for slowing down Iran's nuclear development process. For those with an interest in infosec, this is an interesting concept with potential applications all over the real world.
Power stations have been a special point of contention, as many of them still use legacy equipment with little or no security layer, and still others use the default passwords on the control systems which directly control physical equipment. Some people are astounded that this equipment is not systematically attacked; others believe that China, North Korea or other rogue nation states are simply accumulating an ever expanding database of vulnerable equipment while waiting for the most opportune moment to take down vast amounts of enemy infrastructure at one time.
Part II - Built my own Stuxnet
As for my Stuxnet experience: the other day I was fiddling with the computer and went into my BOINC settings. I had noted that the BOINC client I run in Linux was only running at a 50% CPU allocation and decided to see what it was capable of. In fairness to the people at Berkeley, they do warn on their settings page that the CPU allocation percentage can be reduced to reduce CPU heat. So I noted this, adjusted the CPU percentage up, and watched it.
I was thrilled to see that I reached > 2 GFLOPS, but after considering the potential for overheating, I lowered the percentage again half a day later. Too late. The next time I used a physical component (several hours after lowering the CPU allocation to its previous level, I opened the CD drive), I burned out the power supply. Bang! Down went the system. One new power supply later, I am back online (and running BOINC at 50% again).
A couple of interesting points occur from this lesson:
- Even though I decided to see what my system was capable of, I also believed that I had built a more robust system than normal (since I have some extra goodies in my Linux box, I also have three extra cooling fans in a gaming configuration),
- I could run the air conditioner 24/7 to offset the extra heat, but that is not practical and the electric bill would go through the roof; capability does not equal practice,
- I was using a civilian system (BOINC). Not something (too) specialized or exotic, and not something that one would think would or could likely render a computer inoperable,
- A civilian system, if hacked, could be used to burn out hundreds or thousands of computers simply by tweaking this setting because not all systems have sensors or software capable of monitoring temperature spikes (along with my new power supply, my Linux box now has temperature sensors and software up and running),
- Even a system which can monitor itself needs to be further specialized to take specific action in the event of certain conditions. Anything less requires human interaction and monitoring,
- This box was offline for the time it took to get a new power supply ordered, shipped and installed. I have other ways of getting online and backups of key files. One hopes that companies which have critical systems have the wherewithal (vendor lists, technicians on call, individuals authorized to go to vendors and purchase parts, leadership hierarchies, transportation plans, failover systems, in other words, common components of risk management) in place for rapid system recovery. From previous experience, I somehow doubt that these plans go far enough or consider all scenarios.
So, in the aftermath of BYOSD, I decided that I wanted my Linux box to have temperature monitoring active and to act without human intervention in the event that system temperature went too high. Which led to:
Part III - HOWTO: Automate temperature monitoring in CentOS Linux
- I started with a box running CentOS Linux 6, Gnome 2 and Python 2.6 with Tkinter installed,
- Install lm_sensors. lm_sensors is the generic sensor monitoring service; a separate GUI is required to monitor lm_sensors data,
- Run sensors-detect as superuser (the script ships with lm_sensors; see http://www.lm-sensors.org). It will offer to detect the correct temperature probe(s) on your mobo (that's Geekish for the English word motherboard) and write the correct .conf file,
- Optionally install gkrellm, which has a kind of decent interface for many things including lm_sensors, but runs as an opened application, not a taskbar icon. It's not what I wanted, but it's cute enough to mention,
- Install gnome-applet-sensors. This may not be found in your CentOS packages. If not, search online for gnome-applet-sensors-2.2.7-1.el6.rf.x86_64.rpm or equivalent for your system. With gnome-applet-sensors you will be able to add a monitor to your taskbar for the temperature probe(s) in your mobo.
You should see something like the following on your taskbar now.
Well and good, you can now monitor temperature on your taskbar, and that may be enough for many users. But, if you want Linux to monitor things for you, and take action if things get a little too hot, let's continue:
- Edit /etc/sudoers (safest with the visudo command) to give your user sudo permission to run /sbin/shutdown without a password, like this (as one possible example):
root ALL=(ALL) ALL
user ALL = NOPASSWD: /sbin/shutdown
- Next, create a Python script to a) pop up a graphic notification that the box is shutting down, b) mail an email warning to the root system mailbox, c) shutdown the system. This script will need a text file for the email and a custom .GIF graphic.
The .GIF just has any message to indicate that the box is shutting down because of high temps. Mine looks like this:
The text file is in this format:
Subject: Warning! This computer was shut down due to high temperature!
The python script for this process acted as required automatically.
Please monitor temperature.
The Python script looks like this:
import os # os.system() below allows direct OS command execution
import base64 # encodes the .gif for Tkinter's PhotoImage
from Tkinter import *

root = Tk() # The base window, a canvas.

# This inserts a graphic/logo.
# .gif format req'd, jpg and png are not valid data types
raw_data = open("/home/user/scripts/hitemp.gif", "rb").read()
image = PhotoImage(data=base64.encodestring(raw_data))
label = Label(root, image=image)
label.pack() # without pack() the warning graphic never appears

# Mail the warning to the system mailbox, then order the shutdown.
mailcommand = "sendmail firstname.lastname@example.org < /home/user/scripts/hitemp.txt &"
shutdowncommand = "sudo /sbin/shutdown -h -v +1 &" # causes shutdown in 1 minute, -v optional
os.system(mailcommand)
os.system(shutdowncommand)

root.mainloop() # Display the warning window until the box goes down
- Now use the command python /home/user/scripts/hitemp.py as an alarm in your gnome-applet-sensors preferences:
If you prefer gkrellm as a monitor, it has a similar launch-on-condition option:
If the alarm level temperature is reached, the Python script executes: notifies the system mailbox, pops the graphic, and shuts the box down a minute later. When you turn on your Linux box later, you'll have email to the effect that it was shut down because things got too toasty inside the case, and the computer protected itself.
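One last hedged aside: if you want to sanity check your temperature probes before trusting the alarm to a GUI, the kernel's hwmon interface can be read directly. This little sketch assumes the /sys/class/hwmon layout; probe names and paths vary by motherboard and kernel version:

import glob

# Each temp*_input file holds a reading in thousandths of a degree Celsius.
# Older kernels put the files under hwmon*/device/ instead of hwmon*/.
for pattern in ("/sys/class/hwmon/hwmon*/temp*_input",
                "/sys/class/hwmon/hwmon*/device/temp*_input"):
    for probe in sorted(glob.glob(pattern)):
        print probe, int(open(probe).read()) / 1000.0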
Wallpaper, Screensavers and Webcams, oh my!
March 6, 2014
Short post today, if for no other reason than that the story is not so exciting, but the result is nice. I use a screensaver which has a module that pulls random images from the web into a collage. That's it; that's largely all that module does. I was looking at the option of limiting that module to a webcam shot of Paris, London, New York, wherever there is a public webcam with a good view. For technical reasons that proved impractical at this time, so I changed around the code I had written and came up with something different, but still nice, and actually closer to what I was picturing in any case.
Submitted for your approval, a program called Paper Shaper. It randomly pulls a JPG image from a user maintained list of webcams, OR from your offline wallpaper gallery, OR randomly from one or the other and saves it to a specific file and location. Since the file name and location do not change, it can be selected for wallpaper and updates automatically. Simple enough. Here are the very basic technical specs.
These applications should be available with most if not all Linux distros.
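The core logic is small enough to sketch. What follows illustrates the approach rather than reproducing Paper Shaper's actual code; the webcam URLs, gallery paths and target file are all placeholders:

import random, shutil, urllib

# Placeholder sources: a user maintained list of webcam stills, and a
# local wallpaper gallery. The target directory is assumed to exist.
WEBCAMS = ["http://webcam.example.com/paris.jpg",
           "http://webcam.example.com/london.jpg"]
GALLERY = ["/home/user/wallpapers/beach.jpg",
           "/home/user/wallpapers/forest.jpg"]
TARGET = "/home/user/.wallpaper/current.jpg" # the desktop always points here

# Pick a source at random. Because TARGET never changes name or location,
# the desktop can be set to that file once; the wallpaper then updates
# automatically each time this script runs.
if random.random() < 0.5:
    urllib.urlretrieve(random.choice(WEBCAMS), TARGET)
else:
    shutil.copyfile(random.choice(GALLERY), TARGET)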
This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
And here it is: Download Paper Shaper from
On the Google Wiretapping Lawsuit
September 28, 2013
Timothy Lee writes in the Washington Post that the lawsuit against Google for scanning email for marketing purposes is without merit. I hardly know where to start a response. The reasoning in the article is replete with exactly the same bizarre thinking that made Google decide that scanning email was sound policy in the first place. Given the wealth of opportunity, I'll respond to items (with some necessary hopping around for logical consistency) in the order in which they appear in Mr. Lee's article. 
Mr. Lee writes: “To provide a useful e-mail service, Google needs to perform a number of complex operations on each e-mail a user receives.”
The problem here is the term “useful”. Mr. Lee rightly suggests virus scanning and display formatting as valid reasons for email scanning. However, he goes on to conflate those with other so-called services provided by Gmail, such as indexing, searching and marketing. Let's look at these additional so-called services and see if they can be classified the same way.
-- Wholesale scanning. First, it should be obvious (although it apparently is not) that scanning an email for one purpose does not justify scanning it for any purpose. This is a fundamental flaw of reasoning in and of itself. However, this “reasoning bloat” is not inconsistent with a marketing oriented internet mindset. The hope is that the public will generally misunderstand this, and that tasks of greater necessity will be confused with other, usually marketing related, tasks which providers would rather the average user not know too much about.
As an example, consider Mr. Lee's article itself. To access the article, I cleaned my browser's cookies and cache and went to the Washington Post's website. I followed the most direct path to Mr. Lee's article, which was 2 pages: the main page at washingtonpost.com and a hot link they provided directly to Mr. Lee's article. According to my tracking software and a manual review of the browser, loading those two pages caused 28 cookies to be set: 2 from the Post, 26 from third party vendors.
And it gets worse. Of those 26 third party cookies, at least 2 were from servers in Germany and Japan. Since we know from the Snowden disclosures that the NSA monitors foreign transactions, by linking to foreign servers and causing cookies to be set, the Washington Post, the very people to report on PRISM with such outrage, tells the government what a given user is reading. Amazing.
-- Indexing as a service. As the NSA controversy has indicated, email metadata is sufficient for indexing purposes. Look at the date stamp, index the inbox by date order. Done. There is no need to otherwise 'index' email for anyone. Many users do not want this 'help'; Gmail itself recognizes this reality, and the service is optional with Gmail.
-- Searching as a service. This model makes several assumptions, many of which, by the preponderance of the evidence, are unjustified; in fact it is faintly ridiculous even to be discussing them. Searching as a service presumes that Gmail knows more about what is important to the user than the user does. Not only is that untrue, presumptuous and foolish on its face, but by forcing this service on users rather than offering it, Google seems tacitly to acknowledge as much. Searching as a service also presumes that Google is competent to perform such a task in the objective sense. Let's look at some examples of Google's objective competence.
Anyone who has ever used Google Play on an Android device knows that competence and Google are far from synonymous conceptually. As an example, I have an Android tablet. I had to acquire from third parties and manually install at least four of my most often used Android apps on my tablet because, although Google Play offers these apps in general, it says that these apps are not compatible with my tablet, and will not offer them to me. Who identified my tablet and made this decision? Google did, when I signed in to Google Play.
Sadly, the most positive thing that I can say concerning the Google Play experience on my tablet is that the tablet was less expensive than an Android phone, so I did not have to spend a fortune to discover Google's incompetence; I had the opportunity to learn relatively inexpensively. I am conflicted about that reality. On the one hand, such incompetence is naturally frustrating, and I recognize that most users are not going to be sufficiently skilled to acquire and manually install Android apps. On the other hand, given Google's philosophy that interference is fundamentally good, perhaps Google's underlying incompetence is a saving grace.
As another example of Google incompetence, I recently tried to access a specific hacker related website. I have a Master's degree in information systems, and am quite naturally interested in the technology and infosec fields. The site in question does not advocate hacking, it merely reports technical information and hacking related news stories. The site's owners have a Facebook page and Twitter feed, advertisers, bylines and references on the articles. In other words, a quite legitimate site dedicated to a specific technical specialty. Recently I clicked on a Twitter link to an interesting article and found that the site is now blacklisted.
The blocking notification page is served by Google and references my ISP. Presumably my ISP is paying Google to subscribe to this blacklist. I accessed the site in question using another free Google service which goes around Google's own blacklist. In other words, Google appears to be charging my ISP for a service they do not provide and essentially stealing my ISP's money. By extension, they would also be stealing from me, of course, but Google should feel free to keep my portion, it was worth it for the laugh. It's also another not completely surprising example of Google's incompetence.
These are the people who demand that they be allowed to do value added searching of your email.
Mr. Lee writes: “If "reading" an e-mail for ad-serving purposes is "interception" under the wiretap act, those other functions [formatting for HTML, spam filtering and virus scanning] could be illegal wiretapping, too. And that would create a huge headache for anyone who runs an e-mail service or social media site.”
Virus scanning has a couple of additional elements but is hardly difficult to understand. Virus scanners get false positives, and viruses attack different operating systems; so some email providers warn about, but still permit, a questionable attachment download. Again, this is configured as an optional service. Nonetheless, it could even be argued that a confirmed virus attachment can materially damage a provider's system, a not unreasonable concern. Scanning content with the goal of protecting the integrity of your servers cannot by any stretch be equated with scanning content for the purpose of targeted marketing.
Mr. Lee writes: “The problem is that Google did seek consent for advertising. Gmail's terms of service state that "advertisements may be targeted to the content of information stored on the Services."”
The real issue here is not that the lawsuit happened, but that it had to happen. That was Google's choice.
 Lee, Timothy B. “No, Gmail’s ad-targeting isn’t wiretapping.” Washington Post. September 28, 2013. http://www.washingtonpost.com/blogs/the-switch/wp/2013/09/28/heres-whats-wrong-with-this-weeks-ruling-that-google-may-be-wiretapping-its-customers/
A Tale of Two Printers (including Tricks and Counter Tricks in Windows 7)
September 19, 2013
My printer is one of those old dinosaurs which will probably still be operational at the turn of the next century. For my part, since this printer was made in the days before plastic was poured so thin that planned obsolescence was implicitly understood, I will be hanging onto it just as long as I can. Getting it running was an interesting exercise.
The printer model is an Apple Laser Writer Select 360. Apple did not really 'make' this printer. In fact, except for an extra Apple specific port, this printer is actually an HP LaserJet III under the hood. Since I have a Linux box and a Windows 7 laptop, I did not specifically seek out an Apple printer. In fact, I took it in exchange for setting up a router for a rather attractive lady as a sort of Lady and the Tramp rolling-of-the-meatball gesture (which ended up going exactly nowhere). In fairness, I was told that the laser printer was broken, and by a near miracle I actually managed to repair it (a lot of people assume that if you know computers, you also can repair printers, monitors, phone lines, cable boxes, car stereos, etc., but as a rule I cannot repair laser printers, and don't even want to try).
Thus did I end up with an Apple printer which was sometimes not an Apple printer, to run with Linux and Windows 7. Linux offers a driver for the Apple Laser Writer Select, and it set up quickly and easily. As usual, the joker in this deck was Windows 7. Windows XP included a Laser Writer Select driver, but Microsoft, in its never-ending campaign to get people to buy new hardware, included neither a Laser Writer Select driver nor an HP LaserJet III driver in Windows 7 by default. However, there is an extended Microsoft printer driver database which does include the LaserJet III. Here's how to access that extended driver database.
This was done in Windows 7 Professional Edition. The process involves the sort of insane backwards thinking that only Microsoft seems to manage consistently. As noted, when installing the printer there was no driver for the Laser Writer Select or the LaserJet III. So, making sure that the computer is connected to the internet, install the wrong printer. Literally. I picked an HP printer just to keep the concept as sane as such a thing could be, but since the LaserJet III was not available, I installed an HP LaserJet Something. Crazy as it seems, go through the entire installation process for the wrong printer. Do not bother trying to print a test page, since you know the wrong printer is installed and the test page will hang forever and then fail. In my case, since I would be sharing the printer over a network, I also made sure that the Linux print sharing network was online.
Once the wrong printer was installed, Microsoft let me change the driver under the printer's properties, including offering an extended online driver database not offered during the original installation process. The extended database takes about five minutes to download, but it includes an HP LaserJet III driver. I could then swap out the incorrect driver and bring the printer online with the network.
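For anyone who wants to confirm which driver Windows actually ended up binding to the queue, here is a minimal sketch of a check, assuming Python happens to be installed on the Windows 7 machine; it simply shells out to the stock wmic tool that ships with Windows 7:

# Sketch: list every print queue and its bound driver, to confirm the
# swap from the placeholder driver to the HP LaserJet III took effect.
import subprocess

output = subprocess.check_output(
    ["wmic", "printer", "get", "Name,DriverName"],
    universal_newlines=True,
)
print(output)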
So I'm running an Apple printer on a Linux box and installed to a Windows 7 laptop as a networked LaserJet III, installed incorrectly then partially backed out. Simple, really.
Proper Thinking about Computer Privacy Models
July 3, 2013
When considering computer privacy in light of recent leaks regarding NSA data collection practices, there is some sloppy thinking going on, even among computer experts who should know better. In a human sense, this sloppiness is understandable. People want to ‘solve’ a problem. The NSA is monitoring online use, people object to it, a privacy solution is implemented, problem solved.
There are a couple of attractions to this reasoning. First, people for the most part have other things going on in their lives: birthdays, graduations, college exams, etc. They are too busy and otherwise disinclined to play ‘Behind the Iron Curtain’ with the NSA on a semi-permanent basis. They want the privacy problem SOLVED once and for all. There is also the mentality of so-called ‘computer experts’. They want to provide the solution that people want; therein lies their expertise. They do not want to admit (or do not know) that the issue of computer privacy is never truly ‘solved’.
A good example is the huge number of articles that came out after the news of NSA monitoring broke. The Internet has been flooded with articles examining and explaining the use of PGP, TOR, OTR, whole disk encryption, etc. Implement these, goes the reasoning, and you are all set. Computer users who for the most part did not know that these products were available can download and install them and 'solve' the privacy question once and for all.
When I wrote an article proposing a different way of looking at privacy, and why the privacy question may not be so easily 'solved', it made some people very nervous. If I made any error at all, it was to assume that computer experts would implicitly understand the privacy model I was suggesting, and would not require an explicit explanation. Therefore I present the following explicit examination of a broader and probably more realistic definition of computer privacy.
I want to begin in the Middle Ages. An armored knight on an armored horse was a formidable weapon. Armored against attack and capable of attacking, the knight faced foot soldiers who were vulnerable to attack while he remained, relatively speaking, invulnerable. Therefore, to the degree that you had armored knights on your side in a Middle Ages battle, you had an advantage that could tip the balance in war. Let's call this model Middle Ages Battle Version 1.0.
Military strategists looked at the knight and saw a formidable armored opponent on an armored horse, an effective weapon to be sure, but one with some curious vulnerabilities. The knight was relatively uncoordinated, physically heavy and limited in reach. A knight could not maneuver rapidly; designed to confront other knights or sweep down on unarmored foot soldiers, he did not need such maneuverability. A knight was heavy: knight, horse and armor for both came to well in excess of 1,000 pounds. And a knight had to be close to his enemy to strike, so a more maneuverable or more distant weapon defeated the knight's strengths.
So strategies evolved to take advantage of these perceived weaknesses. If a battle could be led to or staged in a muddy field, the heavy knight could become bogged down, and a new weapon, designed expressly for the purpose, could be used to unseat the heavy and unwieldy knight, who could not maneuver as effectively on foot. An archer might not be able to penetrate armor at a distance, but likewise could be placed at such a distance that the knight could not reach the archers, who could decimate the opponent's foot soldiers in relative safety. The knight, while unquestionably deadly, could be defeated with an evolved strategy. And that is the critical point: the effectiveness of mounted knights became unimportant once applied methodologies were in place to defeat them.
In the Hundred Years War, the English used careful observation and thinking about the nature of mounted knights to come up with these attack vectors, while the French tended to follow the old model. To apply this to computer privacy: the French believed that they had 'solved' the issue, while the English evolved their thinking in the face of the old model. There are a couple of examples of evolutionary thinking about computer privacy which demonstrate the truth of this approach.
One example comes from computer hackers. One black hat hacker writes explicitly that “As attacks become more and more sophisticated, so do hardware and software prevention mechanisms.” In the more legitimate realm, project managers call this model the System Development Life Cycle, or SDLC. One depiction of the SDLC is as a process which ends in a Maintenance phase: patching and fixing vulnerabilities, etc., with the major work essentially finished. Another depiction of the SDLC is as a loop; that is, the Maintenance phase is more than patching and fixing, it is also gathering information regarding the needs, use, effectiveness and security of the current system version with an eye to development of the next system version. In this model, the System Development Life Cycle never really ends.
As we saw in the Hundred Years War, the English applied this looped model of the SDLC very effectively. They did not send out knights against knights; they employed pikes and archers and tried to direct battles to muddy fields. Similarly, there is no reason whatever to assume that the NSA is ignorant of strategy. No reason except the spurious comfort that the privacy question can be 'solved' once and for all.
Let's consider this model of the SDLC in relation to the question of privacy. I wrote elsewhere in this blog about a theoretical attack that should compromise PGP on many computer systems and open those systems which install PGP to more in-depth monitoring by the NSA. I developed the theory that this would be a reasonable attack on the assumption that the NSA applies the SDLC and strategic thinking in its planning: that in the face of current privacy models which it could not breach, strategic thinking would require it to find a different approach.
Since the function of the NSA is to monitor rather than destroy an opponent, the assumption of a long-term and evolving strategy applies. It is not reasonable to think that the NSA, in the face of PGP, TOR, OTR, etc., simply throws up its hands and admits defeat. It does the same thing that has been validated in military history, academia and the hacking community: it employs goal-oriented strategic thinking in the model of the SDLC and finds a way to change the status quo. However, the agency would be delighted to have nobody believe that.
Having looked at motivation, we can continue on to a couple of options as regards methods in the next section, PGP in a Security State.
Thoughts on the Snowden/NSA Affair
June 27, 2013
Fundamental questions are raised by the Edward Snowden affair. By this time, sufficient coverage regarding the Snowden affair is available in so many venues that I will not recount the story here, except where specific details impact an examination of some of the deeper questions this affair raises.
Did Snowden commit a crime? Speaking without legal training, it appears so. He did admit that he took a job with Booz Allen Hamilton in order to obtain national security related information which he then took without authorization. It therefore seems he engaged in conspiracy and espionage. So much for the opening act. Now let's look at motives, justification and relationships, not of Snowden, who is after all only in a supporting role in this drama, but of the American government and its citizenry.
I normally object strongly to the modern tendency to excuse any act because someone else does it as well. That tends to indicate that existing in a culture of corruption somehow morally justifies the next corrupt act; it's a ridiculous and irresponsible position. However, a comparison may be useful when the same party is involved in more than one comparable act.
In 1774 the British Parliament passed the Administration of Justice Act. This law essentially said that at the colonial governor's discretion any British official charged with murder or any other capital offense could have a change of venue up to and including transfer of the trial to Great Britain. This obviously selective interpretation of law was so offensive that it came to be called one of the Intolerable Acts in the American colonies. Yet another complaint about the Administration of Justice Act was that it was passed without consent of the governed. Should law not be measured by the same standards when the victimized government also selectively interprets it?
Today, American national security law is interpreted in the same manner that the British government applied in the Administration of Justice Act. At the President's discretion, which is to say, by secret executive order, the constitutional concept of privacy is selectively interpreted when it conflicts with executive branch privilege. The executive branch in a security state (which describes both the Bush and Obama administrations, lest this seem partisan) has invoked executive privilege to short circuit the legal process regarding a variety of issues. The President himself has said that there has to be a compromise between privacy and security, but has unfortunately mentioned this philosophy after the fact, after the degree of compromise had already been decided and implemented. [Another question this raises, specifically as regards the Snowden affair and national security, concerns the possibility of a fair trial for Snowden. Given the executive branch's track record of invoking state secrets privilege to the detriment of the U.S. Constitution, it is probable that any and every argument Snowden might make regarding justification would be impermissible at trial. It therefore becomes more understandable that Snowden might be disinclined to return to the United States in the current national security environment. This is a subtlety that current press coverage of the affair does not seem inclined to consider.]
There is also the consideration of representative law. If current law is passed by representatives of the people, is that not different from the environment of the Intolerable Acts? Unfortunately, it may not be so different. Granted, the legislature passed FISA, which could be said to be an act representative of the people. However, when the law is extended by secret executive order and enforced nonetheless, then what the 'law' actually is becomes both unknown and not a product of the legislature. Neither this process nor the result is conducive to trust.
There are a handful of other arguments to address here, for two reasons. The first is that I have not seen some of these perspectives anywhere else on the Internet, even though I suspect many people would consider them. The second stems from the first: the person expressing these opinions is not without resources or effectiveness. I am speaking about a hacker known online as th3j35t3r.
th3j35t3r has, if reputation is to be believed, hacked jihadist websites the world over, outed Anonymous members and feuded with the Westboro Baptist Church over its take on the United States military. If this is true, then we accept that th3j35t3r is technologically capable and resourceful. th3j35t3r styles himself a patriot hacker, and therefore has much to say about both the technical and national security implications of the Snowden affair.
th3j35t3r mentions Carnivore and Echelon (earlier government spying programs) and the capability of commercial smartphones to monitor users. Using th3j35t3r's own source, “[i]n 2001, the Temporary Committee on the ECHELON Interception System recommended to the European Parliament that citizens of member states routinely use cryptography in their communications to protect their privacy, because economic espionage with ECHELON has been conducted by the US intelligence agencies.” (The original European report referenced in the Wikipedia article seems to be referring to intercepted fax and telephone communications as specifically regards U.S. interception efforts.) However, the fact that some governments spy on citizens or that companies spy on customers in no way logically or morally justifies any one specific effort nor expansion of the practice.
th3j35t3r claims to be “aware of 40 foiled plots in just one year” as a result of programs like PRISM. The public is aware of one official who gave the 'least untruthful' answer in response to congressional scrutiny on the matter. (The British said it better. In response to the Peter Wright/Spycatcher affair, a British minister admitted that he had been “economical with the truth”.) This raises questions of trust and quality of life. Trust comes into play if, as has been suggested, government has used the Internal Revenue Service to harass conservatives or has read journalists' mail. Quality of life issues include whether it is better to accept a physical security risk, or risk of political abuse of an all encompassing intelligence network in conjunction with ever more sophisticated data mining processes.
Last, th3j35t3r, as a patriot hacker, above all else supports the military, law enforcement and intelligence communities “who do the same job no matter who is sitting in the big seat.” Unfortunately, we do not know that, it is illegal to tell us that, and the evidence tends to suggest that the job includes at least some degree of specialized work at the request of political or commercial interests. In this context, there are long-accepted issues with the doctrine of 'just following orders'. First, we have no moral superiority in the face of hacking by other countries. Second, the examples of Nazi Germany and My Lai serve as historical guides that a soldier has some duty to determine whether following certain orders has a moral component. In the case of an American, this could be said to include consideration of whether certain orders are blatantly unconstitutional.
This is not to say that military espionage has no place. We definitely want to know how many planes, missiles, tanks (and computers) others have and how they are arrayed against us. We want to look for vulnerabilities, physical or cyber, in our own infrastructure and in that of potential enemies. The problem comes in when a government decides that its own citizenry might be the enemy and targets it wholesale with its considerable espionage apparatus.
It would be a shame if the political realm managed to turn this affair into the Edward Snowden Show and deflect discussion of the important issues. For whatever reason it happened, it has happened. How we deal with Snowden isn't actually too important in the grand scheme of things. How we as a society deal with the issues that his actions raise is critical.
 Lam, Lana. “Snowden sought Booz Allen job to gather evidence on NSA surveillance.” South China Morning Post. June 25, 2013. http://www.scmp.com/news/hong-kong/article/1268209/snowden-sought-booz-allen-job-gather-evidence-nsa-surveillance
 Avalon Project. “Great Britain : Parliament - The Administration of Justice Act; May 20, 1774.” Yale Law School, Lillian Goldman Law Library. http://avalon.law.yale.edu/18th_century/admin_of_justice_act.asp
 Liptak, Adam. “Obama Administration Weighs in on State Secrets, Raising Concern on the Left.” New York Times. August 3, 2009. http://www.nytimes.com/2009/08/04/us/politics/04bar.html?ref=statesecretsprivilege
 Spetalnick, Matt and Holland, Steve. “Obama defends surveillance effort as 'trade-off' for security.” Reuters. June 7, 2013. http://www.reuters.com/article/2013/06/07/us-usa-security-records-idUSBRE9560VA20130607
 th3j35t3r. “So…About This Snowden Affair.” Jester's Court Official Blog. June 26, 2013. http://jesterscourt.cc/2013/06/26/so-about-this-snowden-affair/
 Schmid, Gerhard. “On the existence of a global system for the interception of private and commercial communications (ECHELON interception system).” European Parliament: Temporary Committee on the ECHELON Interception System. July 11, 2001. http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//NONSGML+REPORT+A5-2001-0264+0+DOC+PDF+V0//EN&language=EN
 NBC News Press Releases. “NBC News exclusive: Transcript of Andrea Mitchell’s interview with Director of National Intelligence James Clapper.” NBC News. June 9, 2013. http://www.nbcumv.com/mediavillage/networks/nbcnews/pressreleases?pr=contents/press-releases/2013/06/09/nbcnewsexclusiv1370799482417.xml
Philosophy of Technology (Kickstarter project)
June 26, 2013
I just started my first project at Kickstarter. If you are not familiar with the concept, Kickstarter is a crowdfunding platform. In a nutshell, that means that hundreds or thousands of people pledge any amount that they can afford toward a worthwhile project, and cumulatively enough money is hopefully found to fund that project. Since funding comes from multiple sources, no one sponsor has to be found who can and will fund the entire project alone. There are many good projects at Kickstarter and some really strange and funny ones (Cthulhu books for children seem to be rather better represented than one might expect). Crowdfunding is a way to get money for a project when traditional means might not be a workable option. For example...
My project (or proposed project, as it remains until or unless funded) is to write a book on the philosophy of technology. This is an important project as it provides a basis for examining the decisions we make about technology, privacy, quality of content, and more (the scope being on some level related to the degree of funding). With examination hopefully comes understanding and better decisions about why we do what we do.
I have heard of Kickstarter for years, but I had never taken the plunge and joined before. It's a little scary, if truth be told, but exciting at the same time. Scary simply because it is a new direction for me. Exciting because suddenly it actually might be possible to tackle my project with adequate funding to do so. I could never go to a bank and say “I want money to research, write and publish a work of philosophy.” Since such a thing seemed so pie-in-the-sky impossible, it only made sense to think about it abstractly, a daydream that we know cannot happen. It still might not happen, but imagine if it does.
With Kickstarter, I can at least pursue a dream, and it just possibly could happen. Imagine the awesomeness of suddenly being able to just do this project that really should be done, even though no commercial venture would ever fund it in their wildest dreams. I am not the only person out there with dreams, and whether my project gets funded or not, Kickstarter is definitely something I will follow from now on. There are always interesting projects and people to sponsor. The link to my Kickstarter project is here:
PGP in a Security State
June 18, 2013
PGP, or Pretty Good Privacy, encryption software for email, has existed since 1991. From the time PGP was first released, it has been under a variety of forms of attack from an American government generally opposed to any communications it cannot read. The Washington Post recently examined why, if encryption is so effective, people do not adopt tools like PGP more readily. Difficulty of use and immediacy were the key concerns cited. The security of the PGP model itself was not seen as a cause for concern.
Since 1991, computing power has increased significantly. The 128 bit encryption standard used in online commerce has been broken in an academic setting. PGP encryption, offering the option to generate keys well in excess of a thousand bits if desired, would seem to be an as yet uncompromised method for secure email communication. That may no longer be the case.
For this examination we will look at several factors which may work, or be made to work, in conjunction to compromise PGP encryption. We will flesh out the requirements of a theoretical virus to handle the technical aspects of the compromise, examine the necessary properties of that virus, and determine whether creating and distributing such a virus is workable within the bounds of current technology and the social and corporate access enjoyed by intelligence agencies, based on what is currently publicly known.
Cracking a PGP key in excess of a thousand bits would be a resource intensive task. It would require considerable computing power and, even as a regularly reliable process, would tend to interfere with currency; in other words, it would presumably take some time to crack each encrypted communication netted using brute force techniques. Yet the focus on the security of PGP keys can also be a weakness of PGP. If your keys are secure, goes the wisdom, so are your communications. Given that focus, let's assume that users' keys will tend to be secured, and bypass the need for possession of keys entirely, while also avoiding the resource requirements of the brute force approach to cracking encrypted communications.
PGP keys must be stored on a desktop or server associated with the user, and they are identifiable by certain structural characteristics. Our properly tailored virus should scan a computer for the presence of PGP keys, wait until a piece of text is about to be encrypted or has just been decrypted, and copy that unencrypted text from the computer's buffer. In other words, if the user feels a piece of text is sufficiently important to encrypt or decrypt, the virus feels that text is sufficiently interesting to copy as well. This approach produces the result that the user expects to see, since the PGP software itself operates normally with our theoretical virus operating externally to it, while completely bypassing any concern with possession of, or access to, PGP private keys.
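To illustrate how trivial the first step is: PGP key material is ASCII armored with fixed header lines, so locating it requires nothing more than a string match. The following is a minimal sketch of the scanning idea only, written as a benign search; the search root, size cutoff and markers are my own illustrative choices:

# Sketch: find files containing ASCII-armored PGP key material by its
# fixed structural markers. Paths and size cutoff are illustrative.
import os

MARKERS = (
    "-----BEGIN PGP PRIVATE KEY BLOCK-----",
    "-----BEGIN PGP PUBLIC KEY BLOCK-----",
)

def find_key_material(root):
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                # Key files are small; skip anything over 1 MB.
                if os.path.getsize(path) > 1000000:
                    continue
                with open(path, errors="ignore") as f:
                    text = f.read()
            except OSError:
                continue
            if any(marker in text for marker in MARKERS):
                hits.append(path)
    return hits

if __name__ == "__main__":
    for path in find_key_material(os.path.expanduser("~")):
        print(path)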
Our theoretical virus developer should also infect every installer of PGP on every server that he can reach, anywhere in the world, so that every user who installs PGP also activates our theoretical virus, and so that every computer which installs PGP is automatically put into the NSA's surveillance net for any other use of the target machine. Several technical and legal characteristics of computer systems facilitate this attack vector.
Software installers on public servers are overall less hardened; they are made to be found and accessed. If Chinese military hackers can regularly access more hardened private servers the world over, access to relatively less secured and publicly accessible servers should be even less difficult. The best publicly available information is that the NSA has a working relationship with major software vendors which provides them with data on operating system and security software vulnerabilities unavailable to the public. So our theoretical virus would more easily stay out of commercial virus scanner definition databases. Even considering that there are foreign based anti-virus providers to whom this relationship may not apply, the Stuxnet virus remained unidentified for a long time even without the cooperation of software security vendors.
If this seems technologically daunting thus far, it's not. The Stuxnet virus operated by identifying specific characteristics of the machines it was able to access, including selecting target machines by geographic region. The Stuxnet virus was both modular and an American creation, which further fulfills the requirements of a dual purpose virus and ease of development. If, as believed, Microsoft and Apple are sharing information about operating system vulnerabilities with the NSA, this further facilitates development and distribution of our theoretical virus. Therefore our virus can not only capture PGP activity by the user, it can also advise the virus maker of PGP activation on that machine, who can then further fine tune aggressiveness or search criteria based on the location of the user.
Using Linux may not increase security against our virus. While our virus may not be able to operate effectively on a Linux system, end to end encryption requires the effective use of encryption software on both the sending and receiving ends. If Alice runs a security conscious configuration of the Linux OS and encrypts securely, but Bob does not use Linux and is infected by our theoretical virus, the communication is compromised at the decryption point regardless of Alice's security. Since in excess of 90% of the world uses an operating system other than Linux on the desktop, this is a significant attack vector. Therefore, not only may PGP be compromised, it may be compromised in such a fashion that a false sense of security is provided, even among users with good security practices.
In theory it would still be possible to use PGP securely even given the existence of our theoretical virus. You could use Alice for offline encryption/decryption; Alice never goes online. Bob goes online for transmission/reception. Now, how do you get the encrypted/decrypted content to/from Bob without exposing Alice? Bluetooth and flash drives (Stuxnet's specialty) can be compromised. Connecting Alice to Bob over the network, in fact by any electronic means, could potentially compromise Alice. You would have to do this:
Encrypt on Alice. Print a hard copy of the encrypted message. Scan the hard copy into Bob with OCR software for transmission. For received messages, the same in reverse: print a hard copy on Bob, scan it into Alice with OCR software for decryption. Of course, to prevent contamination completely, that means two scanners and two printers as well.
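A sketch of the Bob-side scanning step, assuming the message travels as standard ASCII armor; pytesseract and Pillow are my own illustrative choices here, not part of any established procedure. OCR misreads would ultimately be caught by the armor's built-in checksum when PGP processes the result:

# Sketch: OCR a scanned hard copy of an ASCII-armored PGP message back
# into text on the online machine. Assumes Pillow and pytesseract are
# installed, with the Tesseract engine available on the system.
from PIL import Image
import pytesseract

def scanned_page_to_armor(image_path):
    page = Image.open(image_path)
    text = pytesseract.image_to_string(page)
    # Keep only the armored block; anything else on the page is noise.
    start = text.find("-----BEGIN PGP MESSAGE-----")
    end = text.find("-----END PGP MESSAGE-----")
    if start == -1 or end == -1:
        raise ValueError("no complete PGP armor found on the page")
    return text[start:end + len("-----END PGP MESSAGE-----")]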
While this might work, in practice most Americans are not likely to go to that length for security; the scenario starts to feel a bit like living in a Tom Clancy novel. Additionally, one of the key characteristics of the American model of online communications is immediacy. Intricate security processes take time to execute, which runs contrary to the concept of immediacy. Also, as above, this approach would only be effective assuming best practices on the part of all parties to the communication.
Similar models for security are suggested by more knowledgeable computer users, making use of virtual machines and other exotic configurations. As with the more extreme scenario, the problems include lack of immediacy and a requirement for technical knowledge beyond that of the average end user. In addition, even knowledgeable computer experts will admit that they do not know the abilities of nation state actors, and cannot, therefore, certify the security of the virtual machine model, whole disk encryption, etc.
It should nonetheless be considered that anyone involved in a criminal, terrorist, or other similar enterprise may well feel that security is more important than immediacy. Granted such reasoning, a nation state attack targeting encryption may fail in both directions: unnecessarily capturing more mundane communications while at the same time missing the most crucial ones. Thus the false sense of security regarding the security or vulnerability of PGP may apply to nation state actors as well as end users.
 Zimmermann, Philip. "PGP Source Code and Internals". MIT Press. 1995. http://www.philzimmermann.com/EN/essays/index.html
 Lee, Timothy B. “NSA-proof encryption exists. Why doesn’t anyone use it?” Washington Post. June 14, 2013. http://www.washingtonpost.com/blogs/wonkblog/wp/2013/06/14/nsa-proof-encryption-exists-why-doesnt-anyone-use-it/
 Wainwright, Oliver. “Prism: the PowerPoint presentation so ugly it was meant to stay secret.” Guardian, UK. June 12, 2013. http://www.guardian.co.uk/artanddesign/architecture-design-blog/2013/jun/12/prism-nsa-powerpoint-graphic-design
Repetitive Motion Injuries and the Computer Mouse
June 9, 2013
Repetitive motion injuries are the product of any activity which is repeated over an extended period of time. Examples were first documented among meat processing workers who performed the same slicing motions over and over, hundreds or thousands of times per day, but such injuries can result from any motion repeated over an extended period. This includes the long-term use of a computer mouse. I am not a doctor, and the following should not in any way be construed as medical advice, but I can say from personal experience that the following provided noticeable results when I tried it.
I had one non-negotiable rule as I began: I would not go into the computer settings and program the mouse for lefty button use. As with a can opener or playing cards, the reality is this: the majority of computers are programmed righty, and either one does not have the system-level access to reprogram the mouse on a work or public computer, or it is discourteous to reprogram the righty mouse on a friend's computer. Instead, went my reasoning, since I could not mouse lefty at that point anyway, and since mousing protocol is largely social programming of the user in any case, it would be no more difficult to learn to mouse lefty with a righty-programmed mouse than if I did reprogram the buttons; and, without reprogramming the buttons, I was in a position to quickly and easily switch off on any computer anywhere and at any time. (For this reasoning I drew on the experiences of a couple of other lefty mousers I have known who reprogrammed their buttons for left-handed use, and it causes them, and people who use their computers, no end of frustration.)
Text and That Link (tweet2html.py)
May 25, 2013
This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
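The idea behind the script is simple enough to sketch: escape the tweet's text for HTML and wrap the bare link in an anchor tag. The following is an illustrative guess at that idea, not the original tweet2html.py; the function name and regex are my own stand-ins:

# Sketch: escape a tweet's text for HTML and wrap any bare URL in an
# anchor tag. Illustrative only; not the original tweet2html.py.
import html
import re

URL_RE = re.compile(r"https?://\S+")

def tweet_to_html(text):
    escaped = html.escape(text)
    return URL_RE.sub(lambda m: '<a href="{0}">{0}</a>'.format(m.group(0)), escaped)

if __name__ == "__main__":
    print(tweet_to_html("Text and that link: http://example.com/article"))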
Heads Will Roll - The Obama IRS Scandal
Let's examine a couple of actual examples from workers in one data and communications services company, specifically with regard to the difference between what appears to be the policy and processes and what actually happens at the operational level. Capital P Policy certainly existed at this company; it comprised many hundreds of pages covering everything from billing to technical support. Since a Policy exists, goes the wisdom, there is no room for ambiguity or error. That assumption is a serious over-simplification, as a couple of quick examples should demonstrate.
In the first example, this company's Policy stated that technicians were not permitted to use any external resources or information not in the official technical wiki to resolve technical issues. Yet at the same time the company had a new product line which was poorly documented and on which the technical support staff was even more poorly trained. One day a consultant showed up from the home office and talked at length to three specific technicians at one site. These techs weren't in trouble, but the home office really wanted to know how they had a 97% resolution rate on the new product line while the rest of the site averaged slightly under 30%. The answer was that the in house wiki was not sufficient, or at least not well enough organized, to resolve tech support issues in most cases, so these three technicians brought knowledge to the table beyond the wiki, using the wiki as only one of various resources. Technically this was a violation of Policy, since it could result in inconsistency in the technical support experience, whatever that means.
However, it is worth noting that the company did not have an official channel to suggest changes, or a culture which encouraged low level technicians to suggest changes or to do anything except put in their workday and collect their paychecks. There was no point of contact for technical wiki revisions, there was no way of recording documentation and forwarding it for analysis, and on site management was not technologically knowledgeable. Last, in a stringently numbers oriented production environment, there was no time for supplemental activities such as writing revised documentation.
In the second example, Policy said that referring customers to outside vendors rather than resolving customer issues directly was inefficient, frustrating to customers, exorbitantly expensive to the company, was to be avoided in all but the most extreme cases, and could impact a technician's metrics, pay and continued employment. However, as implied above, the in house technical wiki was somewhat lacking. A handful of the top technicians addressed this conflicting Policy by using a closely guarded process to access a hole in the corporate firewall, through which outside vendor websites and wikis could be accessed. Of course, since this was prohibited, it could not be referenced as a resource. Since it could not be referenced as a resource, it could not be suggested for assessment as a practical solution to improving resolution numbers. (It should also be noted that this scenario left a hole open in the corporate firewall for at least a year after its discovery, which helped the technicians even as it left the company itself more vulnerable.)
So, in light of certain realities in a certain type of production environment, let's consider the IRS scandal from a worker's perspective. As a low level IRS worker, you may:
and Personal Responsi-woo-hoo (on Reverse Social Darwinism)