A site devoted to discussing techniques that promote quality and ethical practices in software development.

Tuesday, November 11, 2008

Who's afraid of new technology?

Recently, Rupert Murdoch, the chairman of News Corp, was invited to present the 2008 series of Boyer Lectures. The second lecture, which bears the title of this blog post, contains something worth remembering and pondering:
But technology will do you no good unless you have men and women who know how to take advantage of it. That leads me to my second point: the growing importance of human capital. In other words, an educated and adaptable population.


That's because computers will never substitute for common sense and good judgment. They will never have empathy, either. To be successful, a business needs people who see the big picture, who can think critically, and who have strong character.

Economists call these skills 'human capital'. You won't find this capital listed on a corporate balance sheet. But it is the most valuable asset a company has. If you talk to any chief executive about his number one challenge today, he will probably not say technology. It's far more likely he will say his top challenge is attracting and retaining talented people.


If you are a worker, you have an even greater incentive to invest in yourself....

My point is this: as technology advances, the premium for educated people with talent and judgment will increase. In the future, successful workers will be those who embrace a lifetime of learning. Those who don't will be left behind.

That may sound harsh. But it is a truth we must face. And it is a great opportunity for us all.

For most people, adapting to the changes that are coming will require moving out of comfort zones.

Moving out of comfort zones begins with education.

Sunday, October 5, 2008

Is it wise to break convention - the wildcard convention

I have been a long-term user of 7Za.exe, the command-line version of 7-Zip, and recently have been using it to back up my Subversion repositories. Like many users of this kind of program, I was too trusting, believing it would honour the same convention as other CMD commands and archive all files, except those in use, when issued a command line like this:
7za a -tzip Test.zip MyWork\*.* -r
To many long-time users of the Windows command prompt, this is the standard way to instruct a program to process all files; Del, Attrib, Copy, XCopy, Cacls, Dir, RoboCopy, and Rar all conform to this convention, and hence one naturally assumes that 7Za.exe honours it too.

Not so, and I learned this the hard way. According to the 7-Zip help file:
7-Zip doesn't follow the archaic rule by which *.* means any file. 7-Zip treats *.* as matching the name of any file that has an extension. To process all files, you must use a * wildcard.
Well, while it is admirable for someone to take a stand against this 'archaic rule', it is dangerous to accept the same syntax and silently produce a different result set. It is like swapping the active and neutral wires in electrical wiring just because someone took exception to the wire colour-coding convention.

The fact remains that before 7-Zip came along to Windows, that convention had already been established and entrenched, well before it was mistakenly claimed by 7-Zip to have been introduced in Windows 95. Millions of users are accustomed to this convention, logical or not; it has become second nature. It is like arguing whether it is logical to drive on the left or the right side of the road: a lone group of dissenting drivers taking a stand would not only play havoc on our roads but produce fatalities.

A convention has been established, and people driving on public roads therefore have to conform to it, like it or not. In the US, all vehicles driving in a mine drive on the opposite side to those on public roads. That change of convention is explicitly stated, for reasons sound in mining operations, and drivers are deliberately made to pass through a change-over section.

7Za, however, did nothing of the sort. It took *.* to mean 'all files that have extensions', in contrast to the Windows convention, understood by millions, in which it means 'all files, with or without extensions'.

This is not a dispute about whether 7Za is correct. It is raised here because flouting a convention while accepting the same syntax is a dangerous and irresponsible practice.

For example,
7za a -tzip Test.zip MyWork\*.* -r
should pick up all files in a Subversion repository regardless of whether the files have extensions or not; many Subversion files do not. Rar, with this command, picks up all files, honouring the Windows convention:
Rar a -r Test.rar MyWork\*.*
This is not only wise but responsible, recognising the consequences of failing to do so; taking a stand brings at best a hollow victory and at worst the wrath of users, as is now the case with 7za.

Under the Windows API's treatment of wildcard characters, there is no way to specify collecting only files with extensions. Using Dir as an example:
Dir *.* /s /b
Dir * /s /b
produce the same result: a list of files with or without extensions. The second form is 7Za's way of specifying any file, with or without an extension; yet it accepts the first form and produces a totally different result set.
Dir *. /s /b
produces a list of files without extensions. But there is no wildcard syntax that means 'files with an extension only'.
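The difference between the two readings of *.* can be sketched with plain glob matching. This is an illustrative Python sketch, not 7-Zip's actual code, and the file names are hypothetical Subversion repository entries:

```python
import fnmatch

# Hypothetical repository file names; 'format', 'README' and 'entries'
# have no extension, as is common in a Subversion repository.
files = ["format", "README", "notes.txt", "entries"]

# 7-Zip's literal reading: '*.*' requires a dot, i.e. an extension.
with_extension = [f for f in files if fnmatch.fnmatch(f, "*.*")]

# The entrenched Windows convention: '*.*' behaves like '*', every file.
all_files = [f for f in files if fnmatch.fnmatch(f, "*")]

print(with_extension)  # ['notes.txt']
print(all_files)       # ['format', 'README', 'notes.txt', 'entries']
```

Under the Windows convention the two patterns select identical sets; under the literal reading, the extensionless files silently vanish from the backup.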

As a result of 7-Zip's hollow stand against a convention entrenched among far more Windows users than 7za has, it has successfully dislodged people's trust in the program. An archiver should behave much like the Copy/XCopy commands; changing the operation behind the same syntax is extremely dangerous, and developers should not toy with this kind of ideological stand in an important tool.

My ill-placed trust in 7za has cost me my collection of Subversion repositories. It is an expensive loss, and 7Za's ideological stand against an illogical convention is equally illogical, resulting in real loss. What was 7-Zip hoping to achieve? It has hardly won any friends!

The result is a total distrust of this tool. While I can use Subversion's commands to verify the integrity of a restored repository, other archives produced by 7Za lack such detection, and hence an unknown number of imperfect archives remains.

I don't discourage 7za from taking this kind of admirable stand, but it should be something users opt into. The default should always follow the convention of the OS on which the program is deployed. 7Za already has overriding switches/options of this kind, and its attempt to correct the convention should take effect only when the user makes that choice.

If it is a Unix convention, then either build 7Za as a POSIX-conforming program that runs in Windows' POSIX subsystem, in which case 7Za could even use case-sensitive file names, or provide a switch to turn the Unix convention on.

This is a real-life example of the danger of developers failing to conform to an entrenched convention, no matter how 'archaic' or illogical it is. The first-occupancy rule applies here!

For me, 7Za is now banished to the recycle bin, as it is too dangerous to use a tool that does not conform to convention. It will certainly not earn my recommendation.

Thursday, September 18, 2008

Why does software suck - WSJ's latest format

While WSJ's latest update of its web pages is a welcome sign, it adds one damn annoying feature when showing the indices and the graph, as shown here:

This represents the worst design I have ever seen, namely:
  • There is no obvious UI clue as to which index the graph represents. If you peer hard enough, you might see that the first index is a different colour from the rest, indicating that it is the one the graph represents. What would be wrong with a dot or a symbol next to it?
  • If you want to see the graph of another index, you have to hover the mouse over that index. With a mouse that is fine, but on a Tablet PC it is a pain in the neck, as a tap often brings up the Market Data Center instead. A terrible piece of design.

Sunday, September 14, 2008

Using COM Aggregation to help Web Services

Some time ago, I was consulting on a project for a friend who was trying to use the SOA approach to expose functionality to clients via Web Services. The functionality is implemented as a bunch of .NET components that use a COM connector component provided by the vendor of their remote system, much like the Microsoft Dynamics AX connector and the SAP connector.

Instead of having these components deployed to the desktop, they wanted to place them on IIS and expose them as ASMX web services. Things worked fine, but performance sucked.

They approached the vendor for help and were told to redevelop the entire stack using the vendor's preferred Web Services. Sounds familiar? That meant tossing out years of work.

So I had a look at it and recognised the classic problem of apartment mismatch: the service component lives in the MTA while the STA connector is hosted in an STA, which serialises all calls to the STA. I therefore recommended they follow my recipe for constructing multiple-STA servers.

In haste, I suggested they incorporate the interop assembly for the aggregator as well as that of the aggregatee and alter their construction logic. Assuming their connector is called SXServer and the aggregator AggregateSXServer, that means changing

IServerObject svr = new SXServerLib.SXServerClass();
to
IServerObject svr =
    (IServerObject) new AggregateSXServerLib.AggregateSXServerClass();

This worked like a charm, alleviating the performance bottleneck, except that I now believe I may not have needed so many changes. There are a couple of techniques I can investigate.

Using COM Emulation
There is a technique that may allow enlisting the aggregator without touching the client code, known as the COM emulation technique. It allows silent redirection of CLSID_ORIGINAL to CLSID_NEWONE by means of a special key in the COM registry called TreatAs.

So it sounds like the right mechanism to redirect coclass creation from CLSID_SXServer to CLSID_AggregateSXServer without requiring a rebuild of the web service components. However, it appears that this technique does not support aggregatable components. Even when creating the object for IUnknown, the use of TreatAs returns E_NOINTERFACE. Weird.

Anyway that can be left for another day's challenge.

What is the minimal change required in the .Net solution?
So the next investigation is: what is the minimal amount of change required to the .NET solution? In the past, I recommended including the interop assemblies for both the aggregator and the aggregatee. Do I need all that? At a minimum I need the interop assembly of the aggregatee, because I need to use its interfaces. Do I need the aggregator's as well?

It turns out I do not, if we are happy to replace the new operator with an Activator call. So instead of
   IServerObject svr = new SXServerLib.SXServerClass();
We can change it to:
IServerObject svr = (IServerObject) Activator.CreateInstance(
    Type.GetTypeFromCLSID(
        new Guid( "6E537ABD-96B0-4BBC-9F1C-00924FBF1FB1" ) ) );
This change is more than cosmetic, as it:
1) lets you switch dynamically by providing that GUID as a string from anywhere, such as a config file: a kind of home-made COM emulation.
2) means you do not have to include and ship the interop assembly of the aggregator. A big plus!

Hence this is purely a code change, with no need to add anything to the project references. Why didn't I think of this in the first place?
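The home-made COM emulation idea, resolving the concrete class from a configurable identifier instead of hard-coding a constructor call, can be sketched in a few lines. This is an illustrative Python analogue with invented class and identifier names, not the actual COM machinery:

```python
# Stand-ins for the two coclasses; the names mirror the example above.
class SXServer:
    def ping(self):
        return "direct"

class AggregateSXServer:
    def ping(self):
        return "aggregated"

# A lookup table playing the role of the COM registry's CLSID mapping.
REGISTRY = {
    "clsid-sxserver": SXServer,
    "clsid-aggregate": AggregateSXServer,
}

def create_instance(class_id):
    # The identifier would normally be read from a config file, so the
    # implementation can be swapped without recompiling the callers.
    return REGISTRY[class_id]()

svr = create_instance("clsid-aggregate")
print(svr.ping())  # aggregated
```

Swapping back to the direct connector is then a one-line configuration change rather than a rebuild.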

Of course, when we approached the vendor with our solution, they were less than pleased, because it ruined their chance of selling their Web Service to my friend: no more upgrade fees, and so on.

This kind of attitude is a clear sign of customer neglect and exploitation. Instead of helping their users make the best of their product, they were out to take advantage of their customers' ignorance of technology. Shame.

My help, utilising COM technology and requiring no proprietary materials, has now stopped the exploitation dead in its tracks, allowing my friend to continue with their SOA initiative without the great loss of discarding the current implementation. This is good design and an invaluable service. Any fool can recommend a rebuild, as that requires no in-depth understanding of technology and design; recommending a rebuild is a no-brainer.

Incidentally, there is nothing in the definition of SOA that says COM cannot be used to implement an SOA provider. After all, the whole premise of SOA is to hide the implementation infrastructure. Requiring someone to uproot their design to chain in a vendor's SOA smacks of a total misunderstanding of the SOA design philosophy, and of exploitation. Don't you think so?

Tuesday, September 9, 2008

Who needs any other spyware?

With one big piece of malware that is the browser itself, called Chrome, who needs any other!

Monday, September 8, 2008

The most rewarding experience for a developer

Recently, two events happened that brought home a truth.

A few weeks ago, I read an interview with the noted software guru Donald Knuth, titled "The 'Art' of Being Donald Knuth" (CACM, July 2008, Vol. 51, No. 7, pp. 36-39), in which he said:
If you ask me what makes me most happy, number one would be somebody saying "I learned something from you". Number two would be somebody saying "I used your software."
When I read this, I wondered whether I would be so lucky as to experience the 'number two' item so dear to Donald Knuth, a sentiment I wholeheartedly share.

The second event took place last week. Out of the blue, though not uncommonly, I received a phone call from my telecommunications provider which, under the guise of checking that my account details and services were correct, tried to sell me more services.

On the other end of the line was a very happy lass who was skillful in not getting upset despite my not-so-friendly exchanges and my unwillingness to buy more services.

I gave her my reasons, one of which was that I had once worked for that company. She politely asked where I had worked, meaning which business units. So I obligingly gave her the details and mentioned in passing that I was one of the original developers of a program called DRIFT.

When she heard that, she was so ecstatic that she drew the attention of her colleagues to listen in, wondering what the fuss was all about. She was glad to be actually talking to a person who had developed the tool she was using and obviously appreciated; otherwise she would have used the opportunity to pour out complaints. I protested that I did not do it alone but was part of a team.

Being a developer fortunate enough to be given the freedom to develop and architect something that a large corporation uses as a mission-critical application is one thing; having someone tell me that he or she is using it to do a job is something else.

I have finally had the experience that matters so greatly to Donald Knuth. I can tell you how rewarding it is: the best accolade!

What is so amazing is that this tool was invented back in the late '80s and early '90s, is still in use, and still has no peer. It stands the test of time, a testimony to good design. Of course, it has been looked after since I left the organisation some 10 years ago by a competent team; otherwise it would have fallen into disrepair, denying me the chance of this experience.

It made my day but I still did not buy anything from this lass.

Saturday, August 30, 2008

Difference in Word Counts between Microsoft Word and Open Office

Recently, I was alerted to cases where the word counts, often used as an assignment benchmark, produced by Microsoft Word and OpenOffice differ when analysing the same document.

How much do they differ, and is there a trend? To understand this phenomenon, a number of Word documents were used; some have extensive diagrams, while another is pure text produced from a very large document containing diagrams, tables, and so on.

Here are the results:
There does not appear to be any trend in these data samples, and hence more investigation is required.

It is interesting, and educators relying on word counts may have to be aware of this issue as more and more students use OpenOffice.
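While neither product's exact counting rules are documented here, a small sketch shows how easily two reasonable tokenisation rules can disagree on the same text. The rules below are illustrative, not the actual Word or OpenOffice algorithms:

```python
import re

text = "Word-counts differ: e.g. hyphenated words and em-dash usage."

# Rule A: a 'word' is any whitespace-separated token.
count_a = len(text.split())

# Rule B: a 'word' is a maximal run of letters, so 'Word-counts'
# splits into two words and 'e.g.' into two, while punctuation-only
# fragments are dropped.
count_b = len(re.findall(r"[A-Za-z]+", text))

print(count_a, count_b)  # 8 11
```

Hyphenation, abbreviations, and embedded punctuation are exactly the kinds of cases where two word processors can legitimately disagree.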

Feature useable to users as well as virus writers

I have always maintained that the autorun feature is questionable design and is asking for trouble, and it has now been used to attack NASA's space station.

Sure, you can get the anti-virus software to scan it before launching, but why tempt fate?

A feature convenient to users will be equally convenient to virus/trojan attackers.

Turn it off.

Saturday, August 9, 2008

Why software fails

According to this article:
The biggest tragedy is that software failure is for the most part predictable and avoidable. Unfortunately, most organizations don't see preventing failure as an urgent matter, even though that view risks harming the organization and maybe even destroying it. Understanding why this attitude persists is not just an academic exercise; it has tremendous implications for business and society.
Among the most common factors:
• Unrealistic or unarticulated project goals
• Inaccurate estimates of needed resources
• Badly defined system requirements
• Poor reporting of the project's status
• Unmanaged risks
• Poor communication among customers, developers, and users
• Use of immature technology
• Inability to handle the project's complexity
• Sloppy development practices
• Poor project management
• Stakeholder politics
• Commercial pressures

If the software coders don't catch their omission until final system testing—or worse, until after the system has been rolled out—the costs incurred to correct the error will likely be many times greater than if they'd caught the mistake while they were still working on the initial sales process. And unlike a missed stitch in a sweater, this problem is much harder to pinpoint; the programmers will see only that errors are appearing, and these might have several causes. Even after the original error is corrected, they'll need to change other calculations and documentation and then retest every step.
In fact, studies have shown that software specialists spend about 40 to 50 percent of their time on avoidable rework rather than on what they call value-added work, which is basically work that's done right the first time. Once a piece of software makes it into the field, the cost of fixing an error can be 100 times as high as it would have been during the development stage.

If errors abound, then rework can start to swamp a project, like a dinghy in a storm. What's worse, attempts to fix an error often introduce new ones. It's like you're bailing out that dinghy, but you're also creating leaks. If too many errors are produced, the cost and time needed to complete the system become so great that going on doesn't make sense.

But this is 'the commercial approach', where companies are all too happy to slavishly manage bug reports from customers. Furthermore,
software developers don't aim to fail...we need to look at the business environment, technical management, project management, and organizational culture to get to the roots of software failures. Chief among the business factors are competition and the need to cut costs. Increasingly, senior managers expect IT departments to do more with less and do it faster than before; they view software projects not as investments but as pure costs that must be controlled. Political exigencies can also wreak havoc on an IT project's schedule, cost, and quality.

The following advice should be heeded by all in software development:
Organizations are often seduced by the siren song of the technological imperative—the uncontrollable urge to use the latest technology in hopes of gaining a competitive edge. With technology changing fast and promising fantastic new capabilities, it is easy to succumb. But using immature or untested technology is a sure route to failure.
Bad decisions by project managers are probably the single greatest cause of software failures today. Poor technical management, by contrast, can lead to technical errors, but those can generally be isolated and fixed. However, a bad project management decision—such as hiring too few programmers or picking the wrong type of contract—can wreak havoc.
In IT projects, an organization that values openness, honesty, communication, and collaboration is more apt to find and resolve mistakes early enough that rework doesn't become overwhelming.
Even organizations that get burned by bad software experiences seem unable or unwilling to learn from their mistakes.
Here is the "Hall of Shame" of software failures, and it is interesting to see that a fair share of the shame belongs to ERP software.

Tuesday, July 29, 2008

Code Review Checklist - why does everyone want to write one?

In a previous blog post, I reported some of Robert Glass's recommendations on software standards and how they should be developed.

Today, I was asked to review a code review checklist for .NET development submitted by a development group. Granted, a code review checklist is not the same as a standard, being more a form of guideline, but it should at least be written by someone who knows the material.

After spending some time reviewing the contents, some 30 items, I was horrified at the low quality. Clearly the author of this checklist has only superficial knowledge of .NET. One example of this naive level of knowledge is the demand that all string comparisons be performed in lower case; clearly this person has not read this article. Another is the incorrect exception-handling technique the checklist asks reviewers to look for.
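To see why the 'compare in lower case' rule is naive, consider case mappings that simple lower-casing never normalises. The cited .NET article makes the point with culture-sensitive comparisons; the general problem can be illustrated in Python:

```python
# Lower-casing is not a reliable case-insensitive comparison:
# some characters, such as the German sharp s, have case mappings
# that lower() never reaches.
a = "STRASSE"
b = "straße"

print(a.lower() == b.lower())        # False: 'ß' stays 'ß' under lower()
print(a.casefold() == b.casefold())  # True: casefold maps 'ß' to 'ss'
```

A checklist written by an expert would instead mandate the platform's dedicated case-insensitive comparison APIs, not a blanket lower-casing rule.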

Some of the mistakes, such as the method and parameter naming conventions, suggest that the author's background is really in Java.

It begs the question why someone would want to write a checklist when FxCop already contains one far more extensive than this novice could muster. There is even a book on the guidelines. So why bother?

In fact, the first item of the checklist should demand a clean run of FxCop or the VSTS Code Analyzer. The other items should address more advanced issues: coupling metrics, design patterns, complex scenarios, whether there are unit tests and whether they are constructed in the spirit of unit testing, and so on.

The check list, as Robert Glass said,
should be written and reviewed by the best programming talent in the shop, not by whomever happens to be available at the moment.
This is not the first time, nor will it be the last, that I have seen this kind of mistake.

Tuesday, July 1, 2008

Software Standards and how to develop and use it properly

Over time, I have come across many attempts at software standards by engineers from different development shops and organisations, and I have formed the opinion that many of these attempts are not only poorly compiled but often contain dangerous information. Such shops would be better served by adopting world-recognised publications, though very few are well written and of international-standard quality.

I had never been able to find any publication to back up my observation and opinion until I came across the chapter titled "Standards and enforcers: Do they really help achieve software quality?" in the book "Software Conflict 2.0 - The Art and Science of Software Engineering" by Robert L. Glass.

This chapter not only supports my opinion but also clears up my doubt about whether adherence to software standards produces quality software, with this explanation:
Standards are a narrow subset of the total issue of quality, and although it would be nice to establish the definitions of quality in terms of conformance with a sufficient set of standards, it is simply not possible. Software quality is defined in terms of attributes such as portability, efficiency, human engineering, understandability, testability, modifiability, and reliability.... you will quickly see the software quality cannot simply be legislated, it must be performed. Efforts to measure software quality in terms of standards conformance are doomed to letting poor-quality software slip through undetected.
While conformance to standards alone does not necessarily produce quality software, he has found:
They do indeed help software quality and software productivity. But they must be done right. There are many pitfalls along the way, and many people aren't doing them right.
So how do you do it right? Here are his recommendations:
First of all, standards should be terse and to the point. The must rules for writing software, no matter what the installation, should be distilled into a short manual which can be digested quickly, applied easily, and enforced conveniently..... Guidelines, not standards, should be established. This document may be as long as it needs to be, because it contains helpful hints which most programmers will want to read, and length will not be a problem since there is no need or even intention of enforcing these rules.

Second, standards should be written and reviewed by the best programming talent in the shop, not by whomever happens to be available at the moment.
Third, enforcement of standards should be mandatory, not something done if there happens to be time.... Enforcement processes should include automated enforcers where possible (these are sometimes called code auditors), and the use of reviews as necessary where automation is not possible. Peer code reviews are one important way of doing standards enforcement, although the main focus of such a review should be on quality as a whole, not just standards enforcement.
Finally, Glass has this warning for quality assurance groups:
Often quality assurance organizations will perform standards enforcement activities, and pronounce the software to be of high quality.

Wednesday, May 21, 2008

Microsoft still does not understand security.

It is good to see Microsoft trying to force its users to give up their love of running Windows in an Administrator account.

But I wonder whose fault it is that induced Windows users to fall in love with running in an Administrator account? Certainly not Linux, Unix, or Mac, as they do not set up user accounts with administrative rights.

As I have blogged previously, Microsoft could have nipped this in the bud when it was beta-testing Windows 2000, which began to enforce profile security. Microsoft has never provided any developer assistance for toeing the line. Instead, its installer sets users up with administrative privileges.

Microsoft's representative seems to contradict himself with comments like this in defence of not following the Linux/Mac modus operandi:

"Least privilege permissions are a part of a good defence-in-depth strategy but it's not the endgame. If everybody is logged-in not as admin or not as root, it is really not going to stop the malware in the long run ... malware is not going to disappear," Grimes told AusCERT delegates.

Grimes added malware could infect a computer using various attack vectors but if the user is not an administrator, the attacks are generally less dangerous.

"Can a malware program steal your password if you are not an administrator? Can [criminals] create a program that waits for you to log into your bank, authenticate and then take all your money? The short answer is, yes, absolutely," he added.

No one is suggesting that not running in a root account will prevent malware attacks absolutely, or make malware disappear. Running in an LUA reduces the attack surface and makes attacks harder to implement, as acknowledged by Grimes.

Here is another instance demonstrating that Microsoft does not understand security and is still breeding an army of developers with a myopic view of it.

Recently we discovered that developing a strong-named assembly in Vista using Visual Studio 2008 requires the IDE to run as administrator. The same task in XP Pro requires no administrative rights, even when running in an LUA. One naturally asks why.

Not only is this stupid, it is downright dangerous, because developers end up writing code that can do all sorts of things a standard user cannot. The result is programs that do not run in XP under an LUA but run in Vista with the support of UAC redirection. I would have thought Microsoft would not only encourage but demand that developers write code requiring the least privilege.

It is causing the very problem Microsoft is trying to stamp out. It is a bit late to close the gate after the horse has bolted.

It looks like Microsoft is only half-hearted in attempting "to break away from its tradition of users being an administrator by default."

Monday, May 12, 2008

Using Runas to run Windows Explorer

I have been using Keith Brown's recommendation of launching an instance of IE6 from an admin console to perform administrative tasks when running in an LUA.

Unfortunately, IE7 cannot be run like this, and hence I have refused to use IE7.

Thankfully, the unintended use of "Launch folder windows in a separate process" in Windows Explorer, discovered by Aaron Margosis, does the job equally well.

Saturday, April 12, 2008

Strange logic to fix Microsoft's security problem - by annoying the users

When Vista was first released, Apple poked fun at Windows Vista for annoying PC users with a humorous yet deadly accurate commercial. The sad part is that the message conveyed by Apple turns out to be 100% spot on, and it has now been admitted publicly by Microsoft's product unit manager, David Cross, in a presentation at the RSA Conference: the reason they "put UAC into the (Vista) platform was to annoy users--I'm serious".

This has to be one of the most stupid acts I have seen as a long-term Windows developer: annoying users in order to get developers to change "the ISV ecosystem; applications are getting more secure. This was our target--to change the ecosystem." Microsoft produces the SDK, the tools, and the operating system; it should use those tools to change the ecosystem.

There are two statements from David Cross in this report that need a thorough analysis:
  1. Most users had administrator privileges on previous Windows systems and most applications needed administrator privileges to install or run.
  2. "We needed to change the ecosystem," said Cross. "UAC is changing the ISV ecosystem; applications are getting more secure. This was our target--to change the ecosystem"
Mr Cross shows poor comprehension of figures in his rebuttal of the "myth that users blindly accept prompts without reading them."

Anyone with elementary arithmetic will tell you that "Seven percent of all prompts are canceled. Users are not just saying 'yes.'" is another way of saying that 93% of all prompts are NOT canceled; those users must be saying 'yes'. Hence "It's a myth that users click 'yes,' 'yes,' 'yes,' 'yes,'" is not a myth but is substantiated (93% say 'yes') by his own research with opt-in users. With that level of intelligence, it is not hard to understand why they had to annoy users rather than fix their problem in innovative and technically brilliant ways.

Has he ever heard of using tools and assistance as the way to change them? What about changing Microsoft's own practice, which has encouraged that ecosystem in the first place since Windows NT 3.1? Sadly, the same practice persists in Vista. Now I will come to analyse the above statements.

In relation to the first statement, "Most users had administrator privileges on previous Windows systems", I wonder whose fault that is.

Ever since Windows NT 3.1 was released, Windows has had a security model, albeit not a very tightly enforced one. The separation between privileged accounts, such as Administrator, and lower-privileged ones, now fashionably called standard user accounts, has existed all along.

Microsoft has never, even to this day, followed what Linux and Unix do: never make the everyday user a member of Administrators, so that a clear separation is maintained. In NT, it is up to you to create additional users, and many could not be bothered. Hence most people just live in the Administrator account forever. This breeds the bad habit, which Mr Cross now finds objectionable, that Microsoft itself has encouraged in successive versions of Windows, even in Vista.

Microsoft should force the creation of an ordinary user account, and this should be the account the user uses. Sure, they need the administrator account to install software, and that is difficult to avoid. But at least Microsoft should never have encouraged such a bad practice in the first place. I must confess that I was a great defender of this rather silly and dangerous practice for a long time, until I read Keith Brown's message, which opened my eyes and changed my attitude and practices.

I have had lively discussions with developers and ISVs about adopting this good practice, to ensure one's code demands the least amount of privilege and that one is aware of the security model. The results of some of those discussions have been published in this blog.

The tightening of the Windows security model was first introduced in Windows 2000, with beta versions available years ahead of the year 2000. Windows 2000 is already obsolete, yet over all those years Microsoft has not provided any tools to help ISVs track down those violations. Many are oblivious that violations have been committed, because they are running in an administrator account. Microsoft Visual Basic 6 is a classic example: it will not run properly in an LUA (Least-Privilege User Account); it needs administrative rights to run. Microsoft provides SDKs, Resource Kits and various developer tools to enable people to write Windows programs. Microsoft could easily provide diagnostic and debugging tools, or add support to its premium development tool, Visual Studio, to help ISVs and developers. Yet not one such tool has been released.

Microsoft could have added audit support to Windows 2000 to allow users or developers to detect such malpractice, similar to auditing 'Privilege Use'. Yet no assistance of this kind was provided.

Now in relation to the second statement, "We needed to change the ecosystem": Microsoft has not yet demonstrated any change at all in its own Windows Setup practices. I have yet to see a Vista machine whose owner is running with a Limited User Account. All the installations I have seen were set up with the user's account belonging to the Administrators group.

When users are members of Administrators in Vista, they run with a split token, according to Microsoft's publications: a token containing only Standard User privileges, used for all ordinary operations, and a token carrying the full privilege rights, which is used once consent is given in what is known as "Consent Elevation". This arrangement is known as Administrator Approval Mode (AAM). The privileged token permits the caller to perform consent elevation "because the user simply has to approve the assignment of his administrative rights."

According to this paper on UAC:
Elevated AAM processes are especially susceptible to compromise because they run in the same user account as the AAM user’s standard-rights processes and share the user’s profile. Many applications read settings and load extensions registered in a user’s profile, offering opportunities for malware to elevate.

Even processes elevated from standard user accounts can conceivably be compromised because of shared state....

The bottom line is that elevations were introduced as a convenience that encourages users who want to access administrative rights to run with standard user rights by default. Users wanting the guarantees of a security boundary can trade off convenience by using a standard user account for daily tasks and Fast User Switching (FUS) to a dedicated administrator account to perform administrative operations.
This acknowledges that the elaborate scheme introduced in Vista is as susceptible to attack as the good old ActiveX installation consent process, which has been extensively exploited by drive-by-download, spyware and phishing attackers.

By annoying the customer, as orchestrated by Mr Cross, and forcing users into "reading them", his own expert has concluded that these annoying dialog boxes
say nothing about what it will do when it executes. The executable will process command-line arguments, load DLLs, open data files, and communicate with other processes. Any of those operations could conceivably allow malware to compromise the elevated process and thus gain administrative rights.
Even with my technical knowledge, I cannot tell if consenting to the request is wise.

It does not make sense to annoy the customer. It does not make anything safer. Microsoft should help ISVs and developers by giving them tools and logging facilities. Vista has an Event Log and Eventing support (not to be confused with Web Service eventing); use them!

The bottom line is: I am not against Microsoft attempting "to change the ecosystem"; I am all for it. But to annoy users as a way of forcing the hands of ISVs or developers, many of whom are ignorant of the security model, is to ask your users/customers to do the work for you - unpaid, of course! Microsoft, you are abdicating your technical leadership.

Microsoft can and should immediately - it is long overdue - provide tools to help developers locate these problems. At the moment, I frequently use FileMon, RegMon or Process Monitor to track down violations, and all these tools are free from Microsoft. All Microsoft needs to do is expand these tools' capabilities to home in on the violations and report them to the developers.

The current implementation of virtualizing privileged access to the file and registry systems, without any alerting mechanism for developers or users, is just sweeping the problem under the carpet. In fact, such a strategy often creates confusion on machines shared by many users, as Windows ceases to distinguish machine-wide (single-instance) settings from per-user settings. All users are then permitted to alter their Program Files data or HKLM registry data, as these accesses are virtualized into each user's profile area.

Microsoft has failed to bite the bullet and produce tools to help the developers. Rather than using AAM and declaring that "developers must assume that all Windows users are standard users, which will result in more programs working with standard user rights without virtualization or shims", why not do something about it? Why not make your Windows users standard users after Windows Setup? After all, it is your operating system. No wonder Apple pokes fun at you, and rightly so!

Monday, April 7, 2008

Brisbane Transport Smart Card tries to out smart its customers

Recently, Brisbane introduced its transport smart card system, called Go Card, which comes with its own set of rules and additional benefits to entice Brisbanites to give up the extremely generous paper ticket and use this system.

There is no shortage of blog messages alleging that this is an underhanded way of getting more money out of Brisbanites. Is this true? Is it trying to outsmart its denizens?

Analysis using the published fare table for train journeys confirms that the allegations are well founded. For example, the following points are based on 2 journeys per day:
  1. For travelers buying monthly tickets, Go Card saves you money only if you travel 4 days or fewer.
  2. It seems Go Card is targeting less frequent travelers or those using public transport on an ad hoc basis. In this case, the never-expiring stored value, compared with the finite expiry date of a paper ticket, is a definite attraction.
  3. It seems Go Card is set to break even with the monthly ticket price when the traveler travels 5 days, except for long-distance (>10 zones) travelers. This ignores the money tied up in the Go Card's deposit.
  4. For people living more than 3 zones out, the 3, 6, and 12 month tickets are more cost effective than Go Card.
  5. For people living within 3 zones, Go Card is more cost effective than the 6 or 12 month tickets.
Of course, the Go Card cannot match the benefit of a paper ticket, which allows unlimited trips within the same zone coverage. If you rely heavily on public transport, the paper ticket is the way to go. The Go Card becomes really expensive the more you travel, irrespective of your ticket purchasing pattern.
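A break-even sketch of the kind of comparison above can be written in a few lines. The fare figures here are purely illustrative, not the actual fare table; substitute the published prices to reproduce the analysis:

```python
# Hypothetical fares for one zone band (NOT the real TransLink figures).
monthly_paper_ticket = 100.00   # price of a monthly paper ticket
go_card_fare = 2.50             # Go Card fare per journey
journeys_per_day = 2            # the assumption used throughout the analysis

# Number of travel days at which Go Card spending matches the monthly ticket.
break_even_days = monthly_paper_ticket / (go_card_fare * journeys_per_day)
print(break_even_days)  # 20.0 -> beyond 20 travel days, the paper ticket wins
```

The same calculation, repeated per zone band against each ticket duration, yields the five observations listed above.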

So it seems the government is using the convenience sales pitch of the Go Card to outsmart consumers and to discourage people from using public transport.

Thursday, April 3, 2008

Well done - Sony BMG has been caught pirating

What an irony: a company that gave the rootkit elevated awareness by planting one on its customers' machines, in the name of protecting its IP, has now been caught literally with its pants down.

Well done. I applaud PontDev for not settling, and for giving Sony BMG a swift kick in its backside and a public humiliation.

Monday, March 31, 2008

Some worrying experience with TrueCrypt 5.1

With a recommendation from Bruce Schneier, how could I not give this open-source on-the-fly encryption tool a try?

In summary, my confidence in this tool has been severely dented. It is not its encryption capability but its operation:
  1. When it goes bad, it appears to lock up the machine, and the machine seems to misbehave, becoming uncooperative. In over 2 years with my XP Tablet PC (512M running XP Pro, with minimal AV run on demand, set up to run LUA), I cannot remember having to force-reboot it as frequently as while testing TrueCrypt. It has almost worn out the reset button on my tablet.
  2. Often it is not a true lock-up; for some unknown reason it takes a long time (~4 minutes) for the machine to come out of a comatose state. There is no sign that it will ever come out. All one can do is wait and pray.
  3. When the file container is on a network share, copying files out of it produces some very worrying results. The only way to get the PC back to normal is by rebooting it.
  4. It is unreliable in dealing with waking from hibernation and restarting TrueCrypt.exe, leading to cases where it cannot reconnect to the device driver in Traveler Mode. As said, this does not happen every time, but the pattern leading to it has not yet been identified.
Since it is a tool designed to protect one's most precious information, it should function as reliably as the NTFS file system driver, or even the network driver. It should be rock solid and should not misbehave as in the scenarios elaborated below.

Not knowing when it will fail, when I will need to restart my machine because it cannot deal with waking from hibernation, or when it will lock up my machine has dented my confidence in this tool. The unknown is killing the confidence.

Mounting File Container from Network Share.

In this usage scenario, two PCs were involved. The remote machine, an XP Pro box, contained the file container, and a client machine, another XP Pro box, was used to mount this container. There was never a chance that multiple clients were mounting the same container. The exercise was to copy the entire contents, a directory tree full of PDF and chm files, from this container to a directory on the remote machine.

Each file in the container was not large (~24M) but there were many. The XCopy command was used rather than Windows Explorer.

After copying almost 80% of this container (2G), the client machine suddenly reported Delayed Write Failed, Event IDs 26 & 50. After that the network share was inaccessible, though I could connect to it by IP address. The remote machine had not gone down at all, as I could still use RDP (Remote Desktop) to access it.

What was strange was that Windows Explorer on the client machine showed the directory containing the container on the remote machine as compressed, and the file container itself as missing.

But the RDP session confirmed otherwise. After rebooting the client PC, the directory containing the file container on the remote machine and the container file were shown as not compressed.

What had gone wrong? Why was the delayed write failure causing the client machine to misbehave so badly?

Using it in Traveler Mode.

On my tablet PC, I encountered situations where, after the machine came out of hibernation, trying to restart TrueCrypt.exe produced the following dialog boxes.
The report from System Information confirmed that TrueCrypt.sys had been stopped yet was still hanging around.
This forced me to restart the machine, defeating the purpose of hibernation.

I had been diligent in ensuring all mounted volumes were dismounted and TrueCrypt.exe was terminated properly prior to putting my machine into hibernation. What was causing this problem? Was it because the machine was running LUA? But then again, why did prior cycles of coming out of hibernation not cause this problem?

The disturbing fact was that the machine had gone into and out of hibernation a number of times before encountering this problem. So far there does not appear to be any pattern to it, and this unpredictable behavior is far worse than outright failure.

A lock-up that is technically not a lock-up, if you wait long enough.

There were operations with TrueCrypt.exe, or interactions with the mounted drives, that caused the Tablet PC (the desktop XP machine did not seem to be affected) to appear to lock up. The parts of Tablet XP that seemed to be affected were:
  • Windows Explorer; even the File Open dialog box was affected when invoked from another program, such as MSPaint.
  • Process Explorer appeared to be unable to delete a process.
  • Windows Explorer became unusable. Command line operations were unhindered unless they queried a mounted drive's directory information.
Initially, I thought my machine had frozen, as I could not even delete the TrueCrypt process, and had to cold boot the machine. Later, with patience, after waiting a few minutes (~3-5 minutes), the locked state was finally released.

During that time, the CPU was just ticking over with plenty of reserve, and memory was not in demand. So what was happening? I suspected the TrueCrypt driver was the culprit.

Why was the problem so severe that I could do nothing but wait, and for how long? Everything seemed to be held back until TrueCrypt came out of its comatose state. There was absolutely no feedback to the user. Often I did not start Windows Explorer, relying solely on the command prompt/PowerShell until all volumes were mounted. Still the lock-up affected the desktop and Start button. Terrible.

At other times, when I either mounted or dismounted a volume, it took an inordinately long time (~3-4 minutes) before TrueCrypt completed the operation. So long, in fact, that the XP ghost window came into effect, meaning TrueCrypt was blocked in a method call, unable to reach the GetMessage() call to clear the Windows messages. Sometimes it was quick to mount but terribly slow to make the drive accessible. Initially I thought TrueCrypt and/or my machine had frozen, but patience was the key ingredient in getting it working. This is not good enough! I can mount a USB drive in seconds, reliably, every time. All my file containers were on my hard drive, so it should have had better response than a USB drive.

At the moment, I am only using TrueCrypt for testing, until my confidence is restored and its behavior is more predictable.

Tuesday, March 25, 2008

Get WSJ articles for Free

The other day, I was shown a technique to get WSJ articles for nothing. Apparently WSJ releases some articles to allow Google to present them in the News section.

There are several issues with this technique:
  1. There are some links on www.wsj.com that are only accessible to subscribers, and if you can't activate those pages, you cannot see the titles of those articles nor bring up portions of those pages. For example, the link for 'US Business' is only accessible to subscribers.
  2. The technique seems back-to-front, underutilizing the power of Google's search engine.
Surely there is a better way, and indeed there is one:
  1. Open your browser and go to www.google.com
  2. Then click on the 'News' link.
  3. On the News page, click on 'Advanced News Search'.
  4. On the Advanced News Search options page, in the edit box for 'Return only articles from the news source named', type 'Wall Street Journal'.
  5. Then press the Google Search button and you'll be presented with a list of WSJ free articles.
This is using the power of Google Search! If you are having trouble following those instructions, just click here for the list of articles.

As a paying subscriber to WSJ, I can compare the completeness of the articles returned by Google with what is available to subscribers only. I can tell you that what is released for free is a small subset, and that a paid subscription grants access to many more facilities and data.

Still, the availability of some free articles makes it easier to share information with others without violating copyright or subscription conditions.

Tuesday, March 18, 2008

CodeGear Delphi 2006.Net's TRegistry fails in Framework 2 SP1

An error has been detected when TRegistry.ReadString in Delphi 2006.Net is promoted to run on .Net Framework 2.0 SP1.

The error is the result of a coding error in Borland's VCL.Net library code, manifested as data corruption when Microsoft tightened the compliance rules to conform to Unicode 5 in Framework 2 SP1.

This article will pinpoint the exact cause in Borland's code. It is a very common coding error that was not picked up in code review, and past versions of the .Net Framework chose to ignore the mistake, thus masking the coding error.

The VCL bug causes TRegistry.ReadString() to return a string with an additional Unicode character of value 0xFFFD appended to the end. This is Unicode's standard replacement character, emitted whenever the decoder detects an invalid Unicode character; substituting it is the default action in the .Net Framework.

It is worth noting that Microsoft.Win32.RegistryKey.GetValue() for REG_SZ data does not produce this error and is not affected by the installation of Framework 2 SP1.

Let's begin the code review with TRegistry.ReadString(), which can be found in Borland.Vcl.Registry.Pas at line 546.
function TRegistry.ReadString(const Name: string): string;
var
  Len: Integer;
  RegData: TRegDataType;
  Buffer: TBytes;
begin
  Len := GetDataSize(Name);
  if Len > 0 then
  begin
    SetLength(Buffer, Len);
    GetData(Name, Buffer, Len, RegData);
    if (RegData = rdString) or (RegData = rdExpandString) then
      SetLength(Buffer, Len - 1); // <<--- Line(A) - The mistake.
    // ....
  end;
end;
The coding error is located in the SetLength() call as indicated. To understand why this is a mistake, we need to refer to the PInvoke declaration for the registry access function RegQueryValueEx(), which is the cornerstone of GetDataSize() and GetData().

The declaration can be found in Borland.Vcl.Windows.Pas, line 21,265 and is reproduced in part here:
[SuppressUnmanagedCodeSecurity, DllImport(advapi32, CharSet = CharSet.Auto, SetLastError = True, EntryPoint = 'RegQueryValueEx')]
function RegQueryValueEx(hKey: HKEY; lpValueName: string;
lpReserved: IntPtr; ..... ): Longint; external;
According to the MSDN documentation for CharSet.Auto, this declaration causes all strings to be marshaled as 2-byte Unicode strings, and the call is bound to the RegQueryValueExW variant of the RegQueryValueEx function.

According to the documentation for RegQueryValueEx(), the data returned from calling RegQueryValueExW() for type REG_SZ is a 2-byte Unicode string, and on return the 6th parameter contains the length of the data:
If the data has the REG_SZ, REG_MULTI_SZ or REG_EXPAND_SZ type, this size includes any terminating null character or characters unless the data was stored without them.
It is also worth noting that the unit of this parameter is bytes, not characters. Therefore for a 2-byte Unicode string, this value is always even.

Now return to Line(A) above. Since Buffer is of type TBytes, an array of bytes, subtracting 1 from the length of Buffer, which is even, produces an odd number of bytes. The end result is a nonsensical UTF-16 Unicode string, which must be composed of an even number of bytes; the correct statement would subtract 2, removing the entire 2-byte terminator. Instead of ending with 2 bytes of zeros, the UTF-16 null terminator, the string now ends with a single odd byte of zero, which is clearly not a valid UTF-16 character.
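The effect is easy to reproduce outside Delphi. A minimal Python sketch (the registry value here is illustrative): a REG_SZ value arrives as UTF-16LE bytes including a 2-byte null terminator, and trimming only one byte leaves a lone trailing zero byte that a strict decoder turns into U+FFFD.

```python
# "C:\Program Files" as RegQueryValueExW would return it: UTF-16LE
# including the 2-byte null terminator (an even byte count, 34 bytes).
data = "C:\\Program Files\x00".encode("utf-16-le")

buggy = data[:-1]    # the VCL mistake: trims Len - 1, leaving an odd byte count
fixed = data[:-2]    # correct: drop the whole 2-byte terminator

# The odd trailing byte cannot form a UTF-16 code unit, so the decoder
# substitutes the replacement character U+FFFD.
print(buggy.decode("utf-16-le", errors="replace"))  # C:\Program Files�
print(fixed.decode("utf-16-le"))                    # C:\Program Files
```

This mirrors what Framework 2 SP1 now does where earlier frameworks silently discarded the stray byte.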

According to the knowledge base article:
the trailing NULL byte was removed. However, now the NULL byte is converted to the Unicode replacement character.
As a result, a string returned from TRegistry.ReadString() that was, for example, "C:\Program Files" now becomes "C:\Program Files\xFFFD", which appears as "C:\Program Files�".

In conclusion, the extra character tagged onto the end is the result of Framework 2 SP1 highlighting a programming error in the VCL library. As mentioned, the Microsoft.Win32.RegistryKey class does not mishandle the data in any version of the Framework. This is not a bug in Framework 2 SP1.

If you have a Delphi 2006.Net program, it is therefore recommended that you include an application configuration file containing the <supportedRuntime> element that constrains your application to run only on Framework 1, as a safety measure. Apparently this bug has been rectified in Delphi 2007.Net.

Thursday, February 28, 2008

No excuse for using misleading message.

There are two major share registrar companies in Australia, namely ComputerShare and LinkMarketServices. The former is well built and its reports are comprehensive, but I can't say as much for the latter, which is the theme of this topic.

If you want an example of a commercial enterprise hell-bent on misleading its customers, you need go no further.

Link updates its system at a time when people on the Eastern Seaboard (about 11pm EST) are still up, and those on the Western Seaboard are definitely just finishing off their dinner. During its updates, there is no visible sign to inform the user that an update is in progress and that the system is running with reduced capability. Perhaps the developers are too lazy to deal with error messages properly. The update time is not published prominently on their web site.

Worse still, as you will see, they use a totally misleading message to tell the user that he/she cannot access the data. Wouldn't a phrase like "System in Maintenance Mode. Features unavailable" do? Instead this is what they throw up:

This appears after they let you log into your account successfully and you click on one of the shares in your portfolio. This is a very dangerous message. It could indicate system data corruption, because you are being informed that the share you have selected has incorrect information. Who changed this between now and several hours ago, when you could still view the materials? What should the user do?

When I first encountered this message, I was about to delete the share and re-enter it.

You get the same message when you enter a new one. Hence you do not know whether the details in the one you have just entered are truly invalid, or whether the system developers were too damn lazy to inform users appropriately that maintenance is in progress.

Perhaps the code in this system is in a mess, and exception handling is left to chance rather than dealt with methodically.

Misleading users is just another form of bug in the system. Link has updated its web site recently, but apart from the sugar-lolly coating it is still primitive and buggy underneath. The small consolation is that instead of misleading the user with 'session timed out' and the like, it now gives me a much shorter, yet equally misleading, message.

Why do software users have to put up with this kind of sloppiness and buggy rubbish? Is it trying to project a 24x7 operation that is only skin deep?

By comparison, ComputerShare has yet to mislead me. This shows that a good share registrar web site that does not mislead the user is possible.

Saturday, February 23, 2008

Open Office 2 - coming of age?

I have been testing Open Office on and off, and I must say Open Office 2.3 is pretty impressive.

I am not here to give free advertisement extolling its specialties or features, but to flag areas that are still raw:
1) Dialog box modality is still a problem. The program fails to disable the owner window, allowing the user to click on or access it while the dialog box is supposed to be the top-level window. For example, try Tools | Options... and, while the dialog box is up, click on the resize button of the window behind it. In a well-designed Windows program, you cannot click onto the window behind it.

This weakness is a result of Java, which still has a great deal of difficulty handling this simple UI task.

2) It is great to see a portable version of OpenOffice that I can take with me on my USB drive. It takes only about 250M.

Sadly, it does not support the Tablet PC's Tablet Input Panel (TIP). Without this floating panel, the only way to write is to use the Input Panel from the task bar, which has the side effect of causing every visible window to resize.

Otherwise, it is quite steady and I have not experienced any crashes. However, I must admit that I have not given it a reasonably large document to chew on.

Monday, February 18, 2008

Resistance of knowledge

It is amazing that some of the topics discussed in Ed Deming's book [DEMING], first published in 1982 and dealing with factory production quality issues, are as relevant to today's IT industry as they were to car production factories then.

One of his observations, which unfortunately still plagues the IT industry, is the "resistance of knowledge" in a 'knowledge' industry. Deming observes:
"There is a widespread resistance of knowledge. Advances of the kind in Western industry require knowledge, yet people are afraid of knowledge. Pride may play a part in resistance to knowledge. New knowledge brought into the company might disclose some of our failings. A better outlook is of course to embrace new knowledge because it might help us to do a better job."
So true and precise an assessment, made so many years ago. Imagine if this simple act of obstinacy were eliminated by swallowing one's pride: how much greatness would be accomplished with so little expenditure! So little process would need to be set in place.

The IT industry is more prone to this kind of resistance because of its players' characters. As a result, one has to be vigilant against this disease to prevent it from blinding oneself.

I have a collection full of examples to substantiate this observation.

[DEMING] "Out of the Crisis" by W. Edwards Deming, MIT Press 2000

Thursday, January 24, 2008

2 Billion dollars to pay for ignoring sound principle

The recent exposure of the attack on the Dutch transit system's electronic ticketing system should be a lesson for anyone contemplating implementing any form of security in their environment and systems.

As Ed Felten dissects and analyzes this mess, he concludes that:
Unmasking of the algorithm should have been no problem, had the system been engineered well. Kerckhoffs’s Principle, one of the bedrock maxims of cryptography, says that security should never rely on keeping an algorithm secret. It’s okay to have a secret key, if the key is randomly chosen and can be changed when needed, but you should never bank on an algorithm remaining secret.

Unfortunately the designers of Mifare Classic did not follow this principle. Instead, they chose to combine a secret algorithm with a relatively short 48-bit key.
This kind of disaster would have been less likely had the design process been more open. Secrecy was not only an engineering mistake (violating Kerckhoffs’s Principle) but also a policy mistake, as it allowed the project to get so far along before independent analysts had a chance to critique it. A more open process, like the one the U.S. government used in choosing the Advanced Encryption Standard (AES) would have been safer. Governments seem to have a hard time understanding that openness can make you more secure.
Perhaps the organization that designed and implemented this system was warned internally by people who are aware of this kind of principle, which can be found in any cryptography text, but chose to ignore it. This is not an unusual reaction in many software organizations.

Many managers also hold the view that if you can program in one area of expertise, you can program in any area.

I have encountered so many mutterings like this: "We can't crack this key or reverse engineer it, so it must be secure!"
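A back-of-envelope calculation shows why that confidence, resting on a secret algorithm plus a 48-bit key, is misplaced. The search rate below is a hypothetical figure for dedicated cracking hardware, not a measured one:

```python
# Size of a 48-bit keyspace.
keyspace = 2 ** 48                  # 281,474,976,710,656 candidate keys

# Hypothetical brute-force rate for FPGA-class hardware (an assumption,
# chosen only to illustrate the order of magnitude involved).
keys_per_second = 10 ** 9

hours = keyspace / keys_per_second / 3600
print(f"{hours:.0f} hours to exhaust the keyspace")  # 78 hours
```

Once the secret algorithm is reverse engineered, as it was for Mifare Classic, a keyspace this small falls to exhaustive search in days, not decades.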

Ed Felten correctly identifies the other failure: the lack of checks and inspections in a system of this magnitude and importance. I wonder how anyone can now argue that inspection would have cost the project more. This is a classic example where inspection (using subject experts, of course) would have saved $2 billion!

Wednesday, January 23, 2008

Exaggerated piracy figures admitted

An article appearing in the WSJ titled "Piracy Figures Restated" (subscription required) reports that:
In a 2005 study it commissioned, the Motion Picture Association of America claimed that 44% of the industry's domestic losses came from illegal downloading of movies by college students....
Now the MPAA, which represents the U.S. motion-picture industry, says "human error" in that survey caused it to get the number wrong. It now blames college students for about 15% of revenue loss.
He says 3% is a more reasonable estimate for the revenue at stake on campus networks.
Talk about exaggeration! 44% and 3% are poles apart. Even blind Freddy can tell there is a huge discrepancy. Of course, the MPAA sits comfortably with using region locks and similar schemes to prevent legitimate owners of its materials from playing them in a different region.

Sunday, January 13, 2008

Do not follow the sign blindly

Don't follow this sign if you want drinks and food, unless you have strange tastes:
