Monday, December 17, 2007

Lesson from a collapsed games company

Recently, a games software company in Brisbane, Australia, called Auran, announced that it has called in administrators and laid off a large number of staff. Amid the somber news of a collapsing company, its chief executive made several revealing comments explaining the collapse that should serve as real-life evidence for principles long known in software engineering textbooks but conveniently ignored by many:

"Certainly we spent far too much developing the game. We spent $15 million and basically it came down to too much rework . . . and even then we didn't get it quite right in the end," he said.

"If you overspend creating your product and you run out of money and you release it unfinished you end up closing your studio down."

Let this be etched in any manager's mind as a warning beacon when he/she is tempted to rush out a product with the naive belief that he/she can come back and fix it up later.

Friday, December 14, 2007

Watch out when using Form Derivation in VCL.Net

This is another case where it is important to examine the implementation in greater detail rather than accepting a feature at face value.

Both VCL.Net and WinForms, including Delphi WinForms, support form inheritance, but their implementation and deployment models are poles apart.

In Delphi, you must ship the source code of your base forms to your customers in order for them to derive from them and to author their derived forms in the IDE. In order to view each derived form in the IDE (D2007 included), you must first open its ancestor form in the IDE; otherwise you'll receive an error dialog box.

This is only the requirement of the IDE. The framework works fine at runtime if the controls on the derived form are created and rearranged dynamically at runtime.

Contrast this with WinForms, even Delphi WinForms in D2006: there is no such requirement to ship source code. To derive from a form packaged in another assembly, all you need to do is reference that assembly, and that's all there is to it, as the sketch below illustrates.
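For illustration, here is a minimal C# sketch of WinForms visual inheritance across assemblies. The names (BaseForms.dll, CustomerBaseForm, CustomerForm) are hypothetical, but the point stands: the derived form needs only an assembly reference, not the base form's source.

using System.Windows.Forms;

// Assumed to be compiled into a referenced BaseForms.dll - no source code shipped.
namespace BaseForms
{
    public class CustomerBaseForm : Form
    {
        public CustomerBaseForm()
        {
            Text = "Base form shipped only as a compiled assembly";
        }
    }
}

// Lives in the customer's own project; only the assembly reference is needed.
namespace CustomerApp
{
    public class CustomerForm : BaseForms.CustomerBaseForm
    {
        public CustomerForm()
        {
            Text = "Derived form authored without the base source code";
        }
    }
}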

So it's important when comparing features to look deeper than the glossy brochure.

IT managers blamed for making staff sick

Couldn't agree more with this finding:
the most widely experienced management styles in the IT sector are reactive (45 percent), bureaucratic (38 percent) and authoritarian (24 percent) -- styles that can all have a negative impact on workers' morale, productivity and even health.

....those which seek to empower staff and encourage a supportive and open workplace culture -- are better for business as they can boost staff morale and productivity. ..."Where cultures are more innovative or more proactive, there's generally greater motivation in organisations."

More than a third (37 percent) of organisations that are performing well have "accessible" management teams, whereas more than half (56 percent) of declining companies display bureaucratic tendencies and a quarter have a "secretive" environment.

Bad management can also be blamed for workplaces where a "sick-note culture" exists.

JVC's MOD file is actually an MPEG-2 file.

Recently I acquired a JVC hard disk digital video camera, and it creates files with the .MOD extension.

So I was interested to find out what kind of files they are.

Searching the Internet, I found a great tool called GSpot. Not only does it tell you the format (video + audio) plus a plethora of other technical data, it also informs you whether you have the codec to decode it. The tool quickly identified that the .MOD file is actually an MPEG-2 file and that my machine did not have the codec for it.

A friend recommended the K-Lite codec suite. Once the suitable codec was installed, renaming the .MOD file to .MPG allowed me to play it in Windows Media Player.

Wednesday, December 12, 2007

Vista Security one year on - still no tools to help developers

Recently an article was published discussing various security features of Vista and how they have been received.

With respect to the UAC, the author has this to say:
One reason for the condemnation is that many administrators believe that a lot of legacy applications are programmed to have free reign over the system; truth is, however, they end up not being compatible with Vista.

Windows professionals require elevated permissions to perform elevated tasks. Those tasks are more difficult when administrators are treated like common users. Hence, more criticism.

Even though User Account Control can be annoying at times, I think Microsoft had no choice -- it had to create this feature. Windows XP had such a bad reputation in regards to how easily it could be infected with malware that Microsoft made sure Vista was designed in a way that would prevent malware from taking over the system.

Well, some of the arguments the author used are totally wrong.

There is no excuse for developers, at the end of 2007, not to be aware of the Windows security model that was first released with Windows 2000. If they have been unable to learn it by now - and most, I dare assert, aren't even aware of the security specification - the OS should simply banish them.

XP was given such a bad name because these developers are not well educated, and their ignorance influences other users to turn off the security model. I once asked the vendor of a well-known accounting package why it required administrative rights to run and what privileged operations demanded such a high level of access. Of course, I was told it needs admin rights. On deeper investigation, it turned out the software company was more interested in protecting its license than in protecting the user's machine and data: the licensing-protection technique it used was so poorly developed that it was the part demanding administrative rights.

Developers should not shoulder all the blame for the creation of this mess; Microsoft should share a major portion of it in several areas:
1) Microsoft attempts to make things easier for ordinary users by letting them run wild with no security restrictions - the easiest way out. Have you ever seen a Windows installer that defaults new users to non-administrator accounts?
2) Why isn't there any auditing of violations, or interception of invalid calls logged to the event log? At least that would give developers a chance to see any invalid or security-violating calls.
3) Microsoft built the tools that most developers use to develop Windows software, so why don't those tools have debugging hooks to watch for this kind of violation or excessive demand for rights? Even the latest version still has none. In .Net 2, the debugger can pick up cross-thread UI calls when using WinForm, so it is entirely possible. Just look at how Vista virtualizes some of the calls that would be invalid in XP running under a limited user account. If it can virtualize access to HKLM or protected folders, why can't it write an event log entry or raise something the debugger can pick up?

If Microsoft had attacked this problem when Windows 2000 was first released, by now there would be no need to waste so many resources helping out ignorant and arrogant developers who are neither conversant with the security model nor using the operating system properly.

I have been doing all sorts of development in an LUA (least-privilege user account) for years now and have not encountered any issue. Sure, I have to switch to administrative mode to do certain tasks, but most of the time LUA is fine. The most annoying things are software packages from large software development houses - some documented in this blog - that fail to comply with LUA, when you would expect them to know better.

Why doesn't Microsoft host a 'Hall of Shame' of security violators, much like the Hardware Compatibility List?

Friday, November 16, 2007

Choose your random number generator carefully

Bruce Schneier has just released his overview of the US Government's recently published official standard for random number generators.

His disclosure of a trap door found in the Dual_EC_DRBG is something of a worry:
This is scary stuff indeed.

Even if no one knows the secret numbers, the fact that the backdoor is present makes Dual_EC_DRBG very fragile. If someone were to solve just one instance of the algorithm's elliptic-curve problem, he would effectively have the keys to the kingdom.
And his recommendation for selecting a random number generator:
My recommendation, if you're in need of a random-number generator, is not to use Dual_EC_DRBG under any circumstances. If you have to use something in SP 800-90, use CTR_DRBG or Hash_DRBG.
I wonder what the purpose behind all this is.

Thursday, November 15, 2007

This can't be serious, can it?

Someone asked a very simple question - how can I produce a PDF document from my Word document? - and this was one of the answers:
in the office, why not just print the word document and then using the printer, scan it and email the document to yourself, the default format is PDF.
I am not sure whether the person offered this in jest, but I can tell you that this person is involved with technology in a senior position. That is scary.

Friday, November 9, 2007

What do you call a Windows Live Writer that does not write

The Internet is buzzing with excitement that Windows Live has finally come out of beta. So I installed Messenger, Mail and Writer to see what the fuss about Writer is. I have been a long-time user of MSN Messenger and WLM.

What a total disappointment when WLW can only be used with a keyboard.

Perhaps someone has forgotten to tell WL developers that there is a Microsoft Product called Tablet PC.

Looks like they have repeated the same mistake as in Windows Live Spaces, where the blog site is hostile to tablet users - the very reason I deserted it for Blogspot. Firefox is a far better editor for me on Blogspot, and I need no Windows (Dead) Writer.

Wednesday, November 7, 2007

New report shows file-sharing does not harm the music industry, at least in Canada

It is good for everybody, industry and music lovers alike, to see this kind of study; it can only benefit the open-minded. A recently released report by the University of London for Industry Canada studies the impact of file sharing on the music industry.

Of course, there are always people who refute this without publishing any solid data. That's human nature. In this world, it is either put up or shut up. Comments like this:
It's not rocket science to work out that if you get your music for free, why would you go out and buy it.
I have also reported elsewhere on musicians' views of this issue: they have not found that file-sharing hurts their income, because their income does not derive from CD sales. Rather, their main source of income is concerts and live performances. This kind of music sharing can be good for them, as a kind of advertising.

Many predicted that the introduction of eBooks, electronic libraries such as 24X7 and the O'Reilly library, and the Internet spelled the death knell for physical books - why would anyone want to buy a physical book when one could read it mostly for free? The same argument as presented above. It is interesting that the sales figures available on Google have disappointed these doomsayers; the contrary has actually happened.

It is an interesting finding:
In the aggregate, we are unable to discover any direct relationship between P2P filesharing and CD purchases in Canada. The analysis of the entire Canadian population does not uncover either a positive or negative relationship between the number of files downloaded from P2P networks and CDs purchased. That is, we find no direct evidence to suggest that the net effect of P2P file-sharing on CD purchasing is either positive or negative for Canada as a whole.
[...]
However, our analysis of the Canadian P2P file-sharing subpopulation suggests that there is a strong positive relationship between P2P file-sharing and CD purchasing. That is, among Canadians actually engaged in it, P2P file-sharing increases CD purchasing.
[...]
we find some indirect evidence that price influences CD purchasing, as the variable capturing the motivation to engage in P2P file-sharing because of the perception that CDs were too costly was negatively associated with CD purchases.

Tuesday, October 23, 2007

Delphi in-proc server registration/unregistration code has incomplete coverage

I have just discovered that the DllRegisterServer() and DllUnregisterServer() code in Delphi's ComServ.pas (the ComServ unit) lacks complete coverage of COM usage. It is not entirely a bug in one sense: it simply does not cater for all situations permitted by the language framework, the IDE and COM.

However, if you are in that situation, you will not be shown any visible sign; you simply discover that the interfaces you are publishing are not registered. OleView.exe can show you the missing entries.

Description of the problem

When you create an ActiveX project in D2006, the IDE basically generates a plain old DLL - in Delphi's parlance, a library. It exports the four required COM in-proc server functions: DllRegisterServer(), DllUnregisterServer(), DllCanUnloadNow() and DllGetClassObject(). The implementations of these functions are found in ComServ.pas.

Now if you include a type library, you can begin to define interfaces in it. This DLL, while devoid of any implementation, is of great significance to a COM-based solution, as other in-proc or local servers can implement the interfaces published in the registered type library. There is no common tool, certainly not from Microsoft, to register a stand-alone type library (.tlb); hence it is customary to embed this interface-only type library in an in-proc server that can be registered with DllRegisterServer() and unregistered with DllUnregisterServer().

When you do this, the D2006-produced interface-only COM in-proc server will not register the type library and its interfaces, nor will it perform the unregistration.

RegSvr32, the standard Microsoft in-proc COM registration program, dutifully reports the success code returned by CodeGear's DllRegisterServer() and DllUnregisterServer().

Where is the problem

The cause is a crack in the design and implementation of ComServ.pas. The implementation is based on a very narrow usage scenario, perhaps in the quest for efficiency.

CodeGear assumes an in-proc server always has implementation code - a coclass - that implements the interfaces described in the type library. However, this scenario is not enforced in the IDE. You can describe as many interfaces as you like in the type library without a single coclass, and neither the IDE nor the compiler complains.

In CodeGear's narrow usage scenario, the code in ComServ.pas expects the IClassFactory implementation for the coclass - created in the unit's initialization section, generally in the form of a TAutoObjectFactory.Create() call - to be responsible for loading the type library. This has the flow-on effect of setting TComServer.FTypeLib in ComServ.pas.

Since the unit initialization sections are executed prior to any user code, by the time TComServer.UpdateRegistry() is called, TComServer.FTypeLib is not nil and the type library registration (unregistration) function will then be called.

However, in an ActiveX library whose sole purpose is to publish interfaces, the above scenario never occurs, and hence by the time TComServer.UpdateRegistry() is called from DllRegisterServer() or DllUnregisterServer(), TComServer.FTypeLib remains nil.

UpdateRegistry() does not treat this situation as an error and dutifully returns S_OK, fooling the user.

Incidentally, a review of Delphi 3's source code shows the same incompleteness, so the same malfunction can be expected there.

Work arounds

The work arounds are listed from the most preferred method to the least.
Correct the code and embed ComServ.pas in your project
The best way is to take a copy of ComServ.pas from CodeGear's source directory and include it in your project. It is worth removing ComServ from the uses clause in your DPK before adding the customised ComServ.pas; if the customised file is not included in the project, the fixed code will not be brought in.

You only need to fix the DllRegisterServer() and DllUnregisterServer() as follows:
function DllRegisterServer: HResult;
begin
  Result := S_OK;
  try
    ComServer.GetTypeLib; // **** Added
    ComServer.UpdateRegistry(True);
  except
    Result := E_FAIL;
  end;
end;

function DllUnregisterServer: HResult;
begin
  Result := S_OK;
  try
    ComServer.GetTypeLib; // **** Added
    ComServer.UpdateRegistry(False);
  except
    Result := E_FAIL;
  end;
end;
You only have to ensure that the type library is loaded prior to calling TComServer.UpdateRegistry(), and hence the simple addition marked above is sufficient to rectify the problem. It introduces only a slight inefficiency when CodeGear's anticipated scenario is realized. As an optimisation, one could move the call to TComServer.GetTypeLib into TComServer.UpdateRegistry(), but these functions are hardly frequently called, so such an exercise is not really warranted.
Add a dummy coclass to the project
The next best solution, for those not wanting to tamper with CodeGear's code, is to create a dummy coclass in a unit. This unit then includes the TAutoObjectFactory.Create call in its initialization section, satisfying the scenario expected by CodeGear. At this point, I have not explored whether the coclass can be made non-creatable to prevent code outside the DLL from creating it via the COM API, such as CoCreateInstance().

The presence of this coclass can confuse users, as it will show up in tools like OleView, and you then need to document its reason for existence.

This represents a compromise to a clean design.
Only good for development - use the Component Install facility
This is not really a solution as such, but rather a desperate move to get the interfaces registered so that you can begin to develop against them.

This technique uses the "Component | Install Component... | Import a type library" facility in the IDE to register the type library. Since this path does not go through DllRegisterServer(), it registers the type library successfully.

However, in a deployment situation the installer relies on invoking DllRegisterServer(), so this technique offers no solution for deployment. Furthermore, since DllUnregisterServer() fails to unregister the type library and the interfaces, this technique has no complementary unregistration operation either.

Tuesday, October 16, 2007

Post Delphi Studio 2006 installation experience

With the replacement of my old trusty workhorse machine, I have to go through the ritual of reinstalling all the software packages that I use on a daily basis.

One of them is Delphi Studio 2006. With a new machine that is relatively clean in terms of LUA conformance, it is also a good opportunity to see how well D2006 behaves under LUA. On my previous machine it seemed fine, but I might have been less stringent about conformance.

As required, D2006 was installed from an Administrator account. This is an interesting fact to remember: it was not an administrator console - I actually logged into the Administrator account to install. The installation went pretty flawlessly.

After I logged back into my normal account, which is an LUA, things got interesting. With ProcMon and ProcExp configured to monitor BDS.EXE, I fired up Delphi Studio; here are the problems encountered:
1) During start-up, BDS tried to copy "<.Net 1.1 Framework SDK Dir>\bin\lc.exe" to "C:\Program Files\Borland\BDS\4.0\bin" as lc.dll. Of course this is futile, as I am only an LUA user.

I am just wondering why Borland would want to turn Microsoft's License Compiler into a licensing DLL. Doesn't Borland know that it can late-bind to LC.EXE just as easily as to LC.DLL, by virtue of the CLR assembly probing algorithm coupled with reflection? All you need to do is specify the type name correctly. But perhaps Borland did early binding in their lab. Not very elegant to my mind. Is it even ethical to rename a Microsoft executable? A rough sketch of the late-binding approach follows.
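As an illustration of the point (and not Borland's actual code), here is a C# sketch of late binding into lc.exe; the type and method names inside lc.exe are placeholders that would have to be discovered with a disassembler.

using System;
using System.Reflection;

class LateBindToLcExe
{
    static void Main()
    {
        // A managed .exe loads just like a .dll; the CLR does not care about the extension.
        Assembly lc = Assembly.LoadFrom(@"lc.exe");

        // Placeholder names - the real type and method inside lc.exe would need to be looked up.
        Type entryType = lc.GetType("SomeNamespace.SomeLicenseCompilerType", true);
        object instance = Activator.CreateInstance(entryType);
        MethodInfo run = entryType.GetMethod("SomeEntryMethod");
        run.Invoke(instance, null);
    }
}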

Anyway, realizing BDS is trying to do the impossible, I gave it a helping hand so as to go past this hurdle.

2) The next thing to show up on the radar screen was BDS trying to change the value of this registry item:
HKCR\TypeLib\{F939BACD-3FD5-437A-833F-BA3535A45966}\a.0\0\Win32\(Default)

Of course this is another futile exercise - I am not allowed to write to HKCR! But this issue did not seem to bother BDS.

3) Next, with the IDE fully up, I began to test it. It could easily debug a VCL.Win32 HelloWorld application, but it was a different story when I tried to debug a VCL.Net application. As soon as I pressed the Run menu item, it popped up an assertion failure dialog box (could Borland be shipping debug versions of their C++ packages?)

Like this:

My machine, prior to installing D2006, had every version of the .Net Framework installed, including 3.5, as well as every version of Visual Studio. The .Net Framework is designed to support side-by-side installation, so the versions should be able to live in harmony. Pressing OK brought BDS crashing down instantly.

A Google search brought me to a report of this problem in Borland Developer Network's Quality Central. You can find my rather less drastic and more .Net-correct solution posted as a reply to the brutal solution offered there.

After the simple addition of a .Net config file for rmtdbg100.exe, the IDE could then debug .Net applications.

Incidentally, if you are building a .Net COM component to be used in, say, Excel, make sure you give Excel an application config file to set the supported runtime; otherwise your component may fail to load, because a component that loads before yours can bind the process to a CLR version incompatible with yours. A sketch of such a config file follows.
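As a hedged illustration, an excel.exe.config along these lines, placed next to excel.exe, pins the runtime; the version string shown is only an example - use the CLR version your component actually requires.

<?xml version="1.0"?>
<configuration>
  <startup>
    <!-- Example only: request the .Net 2.0 runtime; adjust to the version your component targets. -->
    <supportedRuntime version="v2.0.50727" />
  </startup>
</configuration>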

4) These operations did not cause any problems but were observed in the tools. BDS seemed to demand Generic Read/Generic Write access whenever it opened a .Net system dcpil, such as System.Xml.dcpil. When that access was denied, for obvious reasons, it then retried with Generic Read only.

These operations look very strange and highly inefficient. It should request the least privilege required, particularly in a non-installation situation. Not only did BDS open dcpil files like this, it did the same with other DLLs as well.

5) BDS had trouble executing in an administrative console - one constructed with the RunAs command using the /netonly switch to provide network connectivity. It fired up fine using "Run As" from its shortcut and from a cmd prompt.

So far, the reason for this misbehaviour has eluded me. Since this is the only program that fails to run in my administrator console, it is not a big deal. One day the penny will drop, so stay tuned.

The most bizarre installation I have ever experienced

I have been using and beta-testing Visual Studio since version 1.x, and with the release of VS2008 VSTS Beta 2 some months ago, I was obviously eager to install it on my machines.

I must say, this has to be one of the toughest VS beta installations I have dealt with so far. As you will see, I had to employ one of the weirdest and most bizarre techniques to finally crack the installation.

I must admit I had lots of very alpha .Net Framework 3 stuff installed on my previous machine, so I would expect VS2008 Beta 2 to fail to install there. Even after I'd cleaned up all those crumbs, it still failed. But I was shocked when the same thing happened on a new machine with only released versions of the .Net Framework on it.

Anyway, when I performed the normal VS2008 Beta 2 setup on a machine with VS2003 and VS2005 VSTS, the process proceeded normally but hung on the installation of .Net Framework 3.5. The presence of those released products should not matter.

After several attempts and some probing of the process, I decided to try installing just .Net Framework 3.5 on its own. That installation proceeded to about 80% and then hung.

Searching the net for similar symptoms, I came across a post describing an installation problem behind an authenticated proxy, a situation similar to mine. Armed with information from Aaron's blog, I ventured into the installation log to try to discover what was happening.

The installation appeared to be hung trying to access a web site, as reported in the last three lines of dd_dotnetfx35Install.txt, reproduced here:
[10/15/07,16:41:18] Setup.exe: GetGlobalCustomProperty - Property:   {8297A38B-6431-4F1D-9F6E-C3D371CEA383} - PropertyName: WebSetup - Value: 1
[10/15/07,16:41:18] VS Scenario: Checking if new setup is available. Url=
[10/15/07,16:41:19] VS Scenario: http://go.microsoft.com/fwlink/?LinkId=91778&clcid=0x409
Armed with the weird suggestion to unplug the Ethernet cable from the wall, clean up all the crumbs (those folders in C:\ with a GUID as a name) and reboot the machine to ensure no lingering installation was still running, I decided to give it a try.

First I tried the .Net Framework 3.5 installation, and it flew past the furthest point I had previously reached. My confidence was boosted.

Then I decided to cancel that installation to proceed with the real VS2008 Beta 2 set up. That also went flawlessly to a no-error completion!!

What a bizarre experience. I don't know why an installation program needs to access a web site at all. Furthermore, I could access that site from IE, which initially made me doubt that it could be the reason the installation hung. But it seemed to be the stumbling block.

Unusual and Bizarre.

Monday, October 8, 2007

Don't despair if your e-mail does not bring about the desired result

Recently an article from the NYTimes caught my attention, reporting that e-mail is often misread by the recipient. This helps me understand some of the issues I have with e-mail, particularly non-social e-mail.

The author reports recent research pointing out why e-mail is often misread:
e-mail can be emotionally impoverished when it comes to nonverbal messages that add nuance and valence to our words. The typed words are denuded of the rich emotional context we convey in person or over the phone.
[...]
Still, if we rely solely on e-mail at work, the absence of a channel for the brain’s emotional circuitry carries risks. In an article to be published next year in the Academy of Management Review, Kristin Byron, an assistant professor of management at Syracuse University’s Whitman School of Management, finds that e-mail generally increases the likelihood of conflict and miscommunication.

One reason for this is that we tend to misinterpret positive e-mail messages as more neutral, and neutral ones as more negative, than the sender intended. Even jokes are rated as less funny by recipients than by senders.
[...]
On the upside, the familiarity that develops between sender and receiver can help to reduce these problems, according to findings by Joseph Walther, a professor of communication and telecommunication at Michigan State University. People who know each other well, it turns out, are less likely to have these misunderstandings online.
One way to overcome this, as proposed by Professor Shirky, an adjunct professor in New York University's interactive telecommunications program, is:
a “banyan model,” after the Asian tree that puts down roots from its branches.

In this approach, he said, “you put down little roots of face-to-face contact everywhere, to strategically augment electronic communications.”

A final note from Professor Shirky:
“social software” like e-mail “is not better than face-to-face contact; it’s only better than nothing.”
So pick up the phone or walk around the cubicle.

Monday, September 24, 2007

IE7 Vs Firefox European survey

Two recent reports released by a Web browser survey company show some very encouraging signs that Firefox has actually pulled ahead of IE7 in Europe.

In the first report, they identify that adoption of IE7 amongst IE users is only 33.9%, versus 83.2% adoption of FF2 amongst Firefox users. This is interesting, showing a lack of endorsement of IE7 - touted as more secure than the vulnerable IE6 - within the IE user group.

Furthermore, it is interesting to find that in a significant number of European countries, more people are actually using FF2 than IE7.

The other report shows a relentless increase in FF2 market share, touching a shade below 28% across Europe. It also reports that Slovenia and Finland have passed the 45% mark.

Good show. Slowly chipping away at the dominance of IE.

DRM never works, and a DRM-free world is being heralded in

DRM is one of those misguided attempts to prevent piracy: developed on mythical and unsound assumptions, it ends up hurting consumers. This article refutes many myths propagated by the music industry and DRM vendors.

In the article, quoting the EFF's Schultz:
In fact, argues Schultz, DRM drives some would-be paying customers to the music black market, because, to date, it's the only place where you can obtain music downloads that you can use without constraints.
A myth often used to justify DRM is that artists are being ripped off by file-sharers. According to the article:
Recording artists won't necessarily suffer in a no-DRM world. These are the struggling musicians who supposedly would be playing their guitars for tips in the subway, in the doomsday scenario, if music were distributed DRM-free. For them, however, the move to a DRM-free world is either good news or irrelevant. It may mean fewer sales for the top moneymakers, but the majority of recordings—85 percent according to the RIAA—don't generate enough revenue to cover their costs.
According to Todd Rundgren, a recording artist, the reality
is that artists don't see money from their recordings; we capitalize on music we have recorded by going out and performing live. It is actually more worthwhile to give your music away—and make it up in terms of ticket sales.

[...]

If it takes me a year to sell a million records and I made $1 million in royalties from that, I'd make that much in a week or so if I toured
The recent appearance of sites like http://www.gbox.com selling DRM-free music is a welcome sign.

Friday, September 21, 2007

Evidence piracy helps record companies

The release of MediaDefender's e-mails onto the Internet - it must be the sweetest revenge for file sharers - reveals evidence that piracy is actually helping record companies.

Tuesday, September 18, 2007

Piracy is not all one way - it also benefits consumers

I have been tracking the argument that piracy is bad and hurts companies for quite some time. Evidence is now available showing that some forms of piracy - which others call by the less offensive term 'fair use' - are actually good for companies. This is not surprising at all.

I do not have solid evidence, but from experience, the revenue lost to piracy is probably money better spent than big-budget advertising campaigns and trade shows. What benefit does one derive from a week at a trade show? Giving someone a few copies of software is probably money well spent on spreading the word about it.

Piracy also has demonstrable benefits for consumers. The latest is forcing Microsoft to cut the price of its Office suite for students; according to Microsoft:
"It is also part of our drive to address piracy issues," Microsoft Australia education marketing manager Donna Magauran said.
Piracy acts like a ceiling, setting an upper limit on what a company can charge that the community will tolerate. Without the hackers, would Microsoft have cut the price of Vista to a third of the original price in China?

Lately, DRM is on the way out because it is offensive to customers and leaves them with a bad taste.

Sunday, September 9, 2007

Portable devices preinstalled with Malware from manufacturers

Today I was shown two portable devices - one a Toshiba 4G memory drive and the other a ruggedized portable hard drive made by I-O Data. On the surface, neither seems offensive until you insert it into your machine. I was asked to get rid of the programs spawned by these devices upon insertion.

This request warned me of a potential attack using autorun, so I examined them on my machine, which has autorun permanently disabled for security reasons - particularly since Sony used this technique to infect users' machines before they had any chance to decline installation of any software.

After I inserted the Toshiba into my machine, it took up two drive letters.

This is a good reason why you should disable autorun permanently on all drives: none of this annoying malware started up on my machine.

Disk Manager gave the game away. One drive letter was consumed by a CDFS partition, and the material on that drive identified it as being from U3.

The second partition was just an ordinary FAT partition. It is not U3 itself that I find offensive.

The second device was a Japanese-made 12G rugged portable hard drive. Once again it behaved like the Toshiba, except that it came with no English instructions - they are all in Japanese. Once again, one partition containing the vendor's software was packaged as a CDFS.

What I find offensive about this kind of device is the manufacturers' arrogance and dictatorial attitude in not asking users whether they want the devices configured in that manner.

Their behavior is identical to the Sony rootkit attack in not seeking the user's consent before loading up all this software, no matter how useful the manufacturer believes it to be. Thankfully, U3 provides a program to remove the U3 partition from the device, and I quickly used it to get rid of that rubbish. But it did not come on that CDFS partition; I had to download it from their site. The I-O Data was less friendly.

The only way to treat that kind of device is to send it back to the manufacturer for a refund. Don't touch it; consider it malware-infected.

Anyone considering buying a portable device should examine the product description to determine whether it is infected with this kind of anti-customer malware. If I buy a drive - a hard drive or a memory drive - I want to format and partition it in any manner I want, not the one forced upon me by the manufacturer.

Sony is another company recently caught out in this kind of stupidity, with its MicroVault.

Because I had turned off autorun, I limited the damage to losing a drive letter to the partition that was mounted as CDFS. None of the malware was started.

For those who leave the default settings allowing autorun, the U3 software will be loaded automatically on the Toshiba device, because the CDFS partition has an autorun.inf.

The I-O Data portable device could have been more damaging had I allowed it to run its autorun.inf. It would have loaded three programs - AutoCRD.exe and two others - as well as some DLLs, all without seeking the user's consent.

Since these companies are so rude to their customers, people should avoid buying this kind of rubbish until they treat their users with respect. In the meantime, turn off autorun permanently.

If anyone knows of any general software to delete CDFS on portable devices, please let me know.

Tuesday, August 28, 2007

Unhealthy obsession with using executables - an example of Golden Hammer Anti-pattern

When one is obsessed with anything, it will lead to bad things - in life and in software development.

This is a real-life example of the Golden Hammer anti-pattern [1]. The characteristic of this anti-pattern is the tell-tale sign of blind application of a technique without questioning its validity and suitability. The story below illustrates the concept and the harmful effect it has not only on the product but also on the company culture.

This is the story of a product; suffice to say it is a serious business application. The product set out with a particular architecture and for a while followed it religiously. However, its reliance on executables, most likely inherited from a Unix background, soon became an obsession that developed into a Golden Hammer anti-pattern. Everything has to be an executable. COM is then used for automation, allowing one executable to incorporate functionality packaged in another executable.

To be fair, the product does have DLLs, but they are not in-proc servers. Normally, using COM local servers is not a bad thing, but when the organization tries to put dialog boxes in a COM local server and maintain modality across the process boundary, the foolhardiness of such an obsession becomes obvious.

This obsession breeds bad programming practice. Since these local servers are registered as single-use, the developers blatantly abuse global variables. With no code review or mentorship, this abuse becomes cancerous, making correction relatively expensive and limiting their options.

The obsession has been repeated over a number of years, successfully infecting several generations of management and technical leads. No one has raised a question.

The authors [1] are spot on in explaining how this lemming-like behavior develops:
In many cases, the Golden Hammer is a mismatch for the problem, but minimal effort is devoted to exploring alternative solutions.

This AntiPattern results in the misapplication of a favored tool or concept. Developers and managers are comfortable with an existing approach unwilling to learn and apply one that is better suited.
Furthermore, the authors suggest the causes of this are:
  • Ignorance, Pride, Narrow-mindedness
  • Reliance on proprietary product features that aren't readily available in other industry products.
Couldn't agree more.

Here are some of the harms of this obsession, representing in real terms missed opportunities to provide an innovative product and to exploit new products and platforms:
1) Because everything is packaged as a COM local server, the product is unable to provide a correct user experience reliably. Forms packaged in another COM local server often fail to sit on top of the 'owner' form (the launcher). Extraordinary amounts of effort have been expended over the years trying to fix this problem, ignoring the fact that a simpler solution using in-proc servers would fix it immediately, courtesy of Microsoft. Pride and ignorance have a lot to do with the blindness to this simple solution.

2) Because process isolation shields the developers from the harmful effects of blatant reliance on global variables, they even use a global variable to do the job of the this pointer, an obvious sign of a lack of training in OOP.

3) The obsession has produced a culture so blind that when they considered moving to .Net technology, as everyone did, they refused to concede the inappropriateness of the executable model in the .Net architecture. As most .Net developers will tell you, you cannot develop a COM local server using .Net languages, but you can develop an in-proc server. "Ignorance", which the authors suggest is a cause of this disease, blinded them so badly that they believed .Net Remoting to be a replacement for a COM local server that needs no registration.

Instead of taking time out for "exploring alternative solutions", they developed a technique to bend the .Net framework to maintain their obsessive behavior using .Net Remoting. The two are chalk and cheese, but to them this is brilliant.

The end results are excessive memory consumption, slow start-up and so on, producing a laughable implementation that is neither a .Net architecture nor an unmanaged COM local server one. Management did not even consider the inappropriateness of their flawed architecture. Poor customers.

Had they been less blatant in using global variables, they could easily have transformed their code from an executable model to one using assemblies, giving them registration-free deployment without the bloat.

The authors [1] are perfectly correct in their observation and suggested refactoring technique:
Philosophically, an organization needs to develop a commitment to an exploration of new technologies. Without such commitment, the lurking danger of over reliance on a specific technology or vendor tool set exists. This solution requires a two-pronged approach: A greater commitment by management in the professional development of their developers, along with a development strategy that requires explicit software boundaries to enable technology migration.

[snip]

In addition, software developers need to be up to date on technology trends, both within the organization's domain and in the software industry at large.... They can also form book study clubs to track and discuss new publications that describe innovative approaches to software development. In practice, we have found the book club paradigm to be a very effective way to exchange ideas and new approaches....

[snip]

... is to encourage the hiring of people from different areas and from different backgrounds...

Finally, management must actively invest in the professional development of the software developers, as well as reward developers who take initiative in improving their own work.
So true.



[1] "Anti-Patterns - Refactoring software, Architectures, and Projects in Crisis" by William J. Brown, Raphael C. Malveau, Hays W McCormick III, Thomas J Mowbray

Sunday, August 19, 2007

CBA Netbank UI gone from bad to worse

Further to my previous post on the need to test UIs under a range of desktop settings, Commonwealth Bank of Australia's Internet banking site has just rolled out a new UI.

Sad to say, it has gone from bad to worse. See this layout distortion at 120 DPI in IE6:
This is bad. The placement of the controls is all over the place. When the same page is rendered in Firefox 2, the layout is fine by comparison.

Wooden spoon prize for CBA's Web developer.

Thursday, August 16, 2007

Test your UI with a range of desktop settings

It is getting more and more common to see badly written UIs that cannot handle different desktop settings. One reason is that most so-called UI textbooks do not talk about the DPI (dots per inch) display setting.

Everyone assumes it is always the default - 96 DPI. Sorry, I have news for you. When you write a Windows program, there are certain things that belong to the user and there are things within your control. One of the things beyond your control is the desktop display settings, but the operating system has functions that allow you to discover them and react accordingly. That last bit is often left out by programmers, who then assume the default, or the same settings as their own desktop. A small sketch of how to query the setting follows.
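As a minimal sketch (my own, not from any textbook), this is one way a .Net 2 WinForm program can discover the current DPI setting; Graphics.DpiX/DpiY wrap the underlying GetDeviceCaps values for the screen.

using System;
using System.Drawing;
using System.Windows.Forms;

static class DpiProbe
{
    public static void Report(Control anyControl)
    {
        using (Graphics g = anyControl.CreateGraphics())
        {
            // 96 is the Windows default; 120 is the common "Large Fonts" setting.
            Console.WriteLine("Horizontal DPI: {0}, Vertical DPI: {1}", g.DpiX, g.DpiY);

            // Scale factor relative to the 96 DPI layout you probably designed at.
            float scale = g.DpiX / 96f;
            Console.WriteLine("Scale factor: {0:0.00}", scale);
        }
    }
}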

To change this setting in XP, just bring up the Display applet in Control Panel, go to the Settings tab, then press the Advanced button. On the General tab you can change the DPI setting. Many do not even know about this.

My development machine is always set differently from the standard, to catch violators - myself included. It can also work the other way: a program developed at 120 DPI may not display properly in a 96 DPI environment.

So you need to test both, and the best way is to use a virtual machine. Fortunately, WinForm in .Net 2 has a built-in facility to adjust for this, shown in the sketch below.
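Here is a minimal sketch of that facility; the designer normally emits the AutoScaleMode/AutoScaleDimensions lines for you, but they are shown explicitly here to make the mechanism clear.

using System.Drawing;
using System.Windows.Forms;

class DpiAwareForm : Form
{
    public DpiAwareForm()
    {
        // Scale according to the system DPI rather than assuming 96 DPI...
        AutoScaleMode = AutoScaleMode.Dpi;
        // ...relative to the DPI this form was laid out at (96x96 here).
        AutoScaleDimensions = new SizeF(96F, 96F);

        Text = "DPI-aware form";
    }
}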

Below are some bad examples.

From the Internet banking site of one of the largest banks in Australia, the Commonwealth Bank of Australia. When the 'menu' is dropped down, the distortion is even more pronounced:

The bank did not actually leave its customers in the dark. Here is a cop-out from the bank:

Why do the navigation tabs within NetBank (i.e. View Accounts, Transfer Money, Bills, Admin & Services) appear to be so large or small?

This could relate to the Dots Per Inch (DPI) setting on your computer.

The normal DPI setting is 96 DPI. If you are using Windows XP the best way to check your DPI setting is as follows:

  • Open Display icon in your control panel;
  • On the Settings tab, click Advanced
  • On the General tab, in the DPI setting list, click the DPI setting you want to use.
This is a real cop-out, because such a demand is unnecessary: millions of other web pages handle this issue without any problem. I suggest that it is their programmers who lack the skill to deal with it. One of their subsidiaries' web pages handles it perfectly.

On a smaller scale, the popular NantAddin for Visual Studio demonstrates a very commonly made mistake:

The almost-disappeared OK and Cancel buttons are marked with a red rectangle.

Friday, August 10, 2007

Rewards for software piracy

I have never subscribed to the claim that software piracy is harmful, and there appears to be research to support this view.

Further proof: piracy has not hurt Microsoft, and indeed Microsoft has recently rewarded the country that pirates most of its software with a two-thirds reduction in the price of Vista.

I guess it is like paying them two-thirds of the price of Vista so that Microsoft can count them as users (you really can't count pirates as users, can you?) to further inflate the Vista population.

So in many ways it is a kind of advertising budget, and definitely more effective than putting advertisements on the back of shopping dockets or cereal boxes.

In the meantime, countries like the UK, US and Australia that treat piracy as bad are penalized by having to pay full price. In some cases, a country is forced to pay even more.

Who says piracy is bad? It is damn good for consumers. I am just wondering what kind of message this instance projects.

Friday, August 3, 2007

AFR's Strange subscription model.

The premium Australian financial newspaper is The Australian Financial Review (AFR). Many years ago, in the dot-com era, it entered the digital media world with a web site where readers could read its articles.

AFR has a very strange subscription model - some would argue no model at all. Both the WSJ and The Economist have different subscription rates for the printed and electronic forms, and the difference in cost is quite substantial. AFR, however, has no such differentiation: electronic access is only available to those with a minimum five-day-a-week annual paper subscription. As a result, I chose to subscribe to the WSJ, which has far more world-class information to read and is definitely cheaper - AFR contains a lot of reprints of WSJ articles anyway - saving heaps. This was quite a few years ago.

Apparently things have not changed much, and seem to have become worse with the upheavals:

Critics have been quick to slam afr.com as slow, irredeemably clumsy, clunky, hard to navigate, inconvenient, expensive, a ploy to charge more for less and a financial disaster.

And if that were not enough, the application has been branded, launched, then rebranded and relaunched in the past 10 months.

Well said and spot on. When I was an AFR subscriber it was terrible, and the organisation is extremely mean compared to the WSJ in how it archives articles. This has not changed either, and seems to have become worse:
To access any services beyond the smattering of open news stories, you need to become a subscriber. A basic subscription involves registration, which allows access to news and investment guides provided from 29 sources and entitles the subscriber to access archived stories at $3 each.

Step up to the Essential level and you'll pay $25 a month (free if you're a subscriber to the print edition), which provides news and investment services, plus 10 free archive accesses. In other words, you'll save $5 on the $30 cost of 10 trips to the library.

The Markets level costs $50 a month and gives 25 archive credits: $75 worth of stories for $50.

The Advanced level costs $150 a month and provides access to the full suite of services: news, market analysis, 80 archive credits (worth $240 a month), market data, economic statistics, industry snapshots, company reports, watch lists, portfolios, charting and so on. Other services at add-on prices can take the total cost of the package to $288 a month or $3456 a year.

It ain't cheap. But it is the only place, other than the printed version, that allows you to access AFR stories. The company last year withdrew from Media Monitors and other copy-sharing services, arguing that it received no compensation for its work.
It does not even tell you that AFR archives articles after 30 days, compared to, say, three months at the WSJ. Once archived, you have to pay to read them! People should boycott this kind of service, as it is clearly attempting to fleece its readers. It would be good to see Rupert Murdoch coming in to give Australian readers a choice. Monopoly breeds this kind of contempt for consumers.

Is this plain stupidity or laziness or both?

The recent release of the Apple iPhone has excited the hacker community so much that it has now discovered the following:

At the top of the list, the device's operating system runs every application with administrator privileges, according to Miller and his cohorts at Independent Security Evaluators, turning a simple breach of any application into a breach of the system. In addition, both the iPhone's stack and heap are executable and the layout of programs in memory are not randomized -- two factors that make exploitation of any vulnerabilities much easier, he said.

"I think people are letting Apple off easy," Miller said. "You need to design the iPhone so that even if there is a problem in Safari, people don't completely take over your phone."

Gee, Apple has unwittingly given the hackers a great helping hand. It is definitely easy to program for the iPhone, because one need not be concerned with security.

Saturday, July 28, 2007

The debate of private field convention in .Net and Microsoft's convention

The other day over lunch a few of my mates got together to catch up. One of the topics that came up was the private member naming convention in .Net.

One of them suggested that it be prefixed with a '_' character. I then threw in a question, something like: "if you propose to use _, why not m_?"

My mate told me that that is Hungarian notation and has no place in .Net. I then asked what sort of Hungarian notation that is: why is "m_", signifying "member variable", any worse or more offensive than "_"? We did not come to any agreement, although I did give a brief history of private member naming conventions.

I favor the no-prefix convention for private members, for several reasons:
1) It is consistent and looks even better in WinForm programming. If you use _, you have to make sure you name your controls accordingly; I guess the same goes for m_. Then if you have an event, say a click event, the event handler looks like this:
private void _myButton_Click(....)

Personally, I do not like seeing a leading underscore.
myButton_Click(...)

reads better.

2) In code, member accesses are prefixed with the leading underscore, like this:
_hasFound = false;

instead of, without the underscore, like this:
hasFound = false;
or
this.hasFound = false;

True, the this. form means more typing and litters the code with this. When using IntelliSense, the moment you type "this.", IntelliSense pops up the member data; I guess you get the same treatment with _ and m_.

Also true: if you have a long function (note: code smell), you may be left wondering whether an unprefixed name is a local variable. This is particularly true in that poorly crafted, ridiculous C++/CLI standard, which tries to retain the C heritage for no good reason other than a failure to let go.

Anyway, I thought my convention - backed by zillions of lines of code and not flagged by FxCop ver 1.35.51212.0 - was the right one, until I used the static analyzer in VSTS. The analyzer is a lot stricter than FxCop and complains about usage like this:
class ClassFieldVarSameAsParameter
{
    int someValue;

    public ClassFieldVarSameAsParameter( int someValue ) // This is fine
    {
        this.someValue = someValue;
    }

    public int SetValueTo ( int someValue ) // CA1500
    {
        int oldValue = this.someValue;
        this.someValue = someValue;
        return oldValue;
    }
}
The SetValueTo() function generates the CA1500 (VariableNamesShouldNotMatchFieldNames) warning, while the constructor does not.

That really turns my convention, and I am sure many others', upside down. The discovery is a great shock to the system, because this usage has been almost universal from day one. It is also used universally by the designers and code generators within VS, including the WinForm and dataset designers.

Should we use a custom dictionary to suppress this rule? I'm not too sure, and I hate exceptions.

It appears that the convention used internally by Microsoft is "m_", and this may explain why the rule was introduced into the static analyzer.

Maybe this signals the end of the debate?

PostScript

For those wanting to retain the no-prefix convention, one sensible way to remove the CA1500 warning in the VSTS static analyzer is to adopt the following convention.

For those situations where CA1500 is generated, append an underscore suffix to the parameter name. Since this usage is localized to the parameter names of a particular function, the ugliness of the underscore is not so widespread. If the suffix is unacceptable, an alternative parameter name can be used instead.

The underscore suffix does not confuse those who have already adopted the underscore prefix, and it allows member variables to remain unprefixed, as the sketch below shows.
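A minimal sketch of the suffix convention, reusing the hypothetical class from earlier in the post:

class ClassFieldVarSameAsParameter
{
    int someValue;

    // The underscore suffix on the parameter avoids the CA1500 clash with the
    // field name, while the field itself keeps the no-prefix convention.
    public int SetValueTo(int someValue_)
    {
        int oldValue = this.someValue;
        this.someValue = someValue_;
        return oldValue;
    }
}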

However, renaming a parameter of any externally visible method is a breaking change.

Monday, July 9, 2007

Delphi.Net assembly loading inefficiency

Further to my report on the poor performance and scalability of VCL.Net, Delphi.Net also has an assembly-loading inefficiency that is the opposite of what other .Net languages produce. This post describes the cost and how one can use a design pattern to ease the pain - though not remove it entirely.

Consider the following UML diagram describing a very simplified Delphi program:

This is a very common approach. So in the unit that contains TForm2, you will have a uses statement mentioning the unit for MyClass3 and so on.

The following sequence diagram describes what happens when one double clicks on TestClient.exe to launch it from Windows Explorer:
In other words, as soon as the DPR is loaded, it not only loads MyDemo3.dll, which contains the class that TForm2 requires, but traverses right down to the lowest level, loading each DLL as it goes. At each load it executes any code in the unit's initialization section. The Delphi.Net compiler translates each unit into a static class named after the unit, placed inside a namespace formed from the unit name plus the suffix '.Units'. Any code defined in the initialization section is placed in the static constructor of this class.

This kind of loading is the opposite of .Net programs developed in other languages, such as VB.Net or C#. Programs produced by those languages do not load DLLs until code in those assemblies is about to execute, and the JIT compiler only compiles what is needed; a sketch of this lazy behaviour follows.
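As a minimal C# sketch of that lazy behaviour (my own illustration): System.Xml.dll is not loaded at start-up, only when the JIT compiles a method that actually uses a type from it.

using System;

class LazyLoadDemo
{
    static void Main()
    {
        Console.WriteLine("Assemblies loaded at start: {0}",
            AppDomain.CurrentDomain.GetAssemblies().Length);

        UseXml(); // System.Xml.dll loads only when this method is JIT-compiled.

        Console.WriteLine("Assemblies loaded after the call: {0}",
            AppDomain.CurrentDomain.GetAssemblies().Length);
    }

    static void UseXml()
    {
        System.Xml.XmlDocument doc = new System.Xml.XmlDocument();
        doc.LoadXml("<hello/>");
        Console.WriteLine("Root element: {0}", doc.DocumentElement.Name);
    }
}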

In a non-trivial solution, Delphi.Net produces an application that literally loads up all the DLLs, JITs the classes in the unit namespaces and runs their static constructor code up front. In other words, it is an awfully inefficient loading scheme, and the time taken to carry out all these actions can become a very significant portion of the start-up cost.

This is the code in the static constructor of MyClass3.Units.MyClass3:
static MyClass3()
{
    RuntimeHelpers.RunClassConstructor(System);
    RuntimeHelpers.RunClassConstructor((RuntimeTypeHandle) MyClass2);
    RuntimeHelpers.RunClassConstructor((RuntimeTypeHandle) MyClass1);
    MyClass3();
}
This is where all the loading begins - the DPR code, i.e. TestClient.Units.TestClient's static constructor:
static TestClient()
{
    RuntimeHelpers.RunClassConstructor(System);
    RuntimeHelpers.RunClassConstructor(SysUtils);
    RuntimeHelpers.RunClassConstructor(Forms);
    RuntimeHelpers.RunClassConstructor((RuntimeTypeHandle) MainForm);
}
The last line kicks off the loading. When MainForm.Units.MainForm's static constructor is executed, it loads the one for MyClass3, and so on.

Here are the System.Diagnostics.Debug.WriteLine() traces that I placed in each unit's initialization section:
[5988] MyClass2.initialization
[5988] MyClass1.initialization
[5988] TMyClass3.initialization
[5988] TForm2.initialization
So beware when you are migrating code from VCL.Win32 to VCL.Net. You need to do extra work in VCL.Net to make your program run efficiently, and this can be a significant cost, or at least more than what CodeGear leads you to believe.

This 'load-just-in-case' attitude is also present in Delphi.Win32, but because Win32 involves no jitting and none of the associated processing, the cost there is not so significant. Nevertheless, it defeats the DLL delay loading that has been available in Win32 for years. Inefficiency nevertheless!

So how do you protect yourself from this wastage? The idea is to apply the Dependency Injection or Dependency Inversion principle to loosen the coupling. Using this principle, the naive implementation becomes something like this:
Many who are familiar with COM will recognize this pattern as something they have been using for years. Publisher.dll is nothing more than the equivalent of the type-library-carrying DLL.

If you run TestClient.exe, all you see being loaded is Publisher.dll. You will not see the rest of the DLLs loaded until you need to create MyClass3. To avoid binding to the definition of MyClass3, you should use System.Activator.CreateInstance() to create it and then use IDemo3 to invoke its methods. This is exactly the same programming model used in COM, just as Don Box noted when answering the question "Is COM dead?". This is what he said:
COM is many things to many people. To me, COM is a programming model based on integrating components based on type. Period. This was COM's primary contribution to the field of component software, and that contribution has changed the way millions of programmers build systems today.
Now, if Delphi.Net/Pascal had assembly-visible (internal) types, which it does not, MyClass3 could be marked as such, further preventing users from binding directly to MyClass3 and reintroducing the very inefficiency we are trying to minimize.
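For illustration, here is a minimal sketch of the pattern in C#; the interface name IDemo3 comes from the discussion above, while the method name and the assembly-qualified type name are assumptions of mine:

// Publisher.dll - carries only the interface; this is the only assembly clients reference.
public interface IDemo3
{
    void DoWork();
}

// TestClient - holds no compile-time reference to the implementation assembly.
public static class Demo3Factory
{
    public static IDemo3 Create()
    {
        // The implementation DLL is loaded only when this line runs.
        // "MyClass3.TMyClass3, MyDemo3" is an assumed assembly-qualified name.
        System.Type implementation = System.Type.GetType("MyClass3.TMyClass3, MyDemo3");
        return (IDemo3)System.Activator.CreateInstance(implementation);
    }
}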

Tuesday, July 3, 2007

A positive outcome for Delphi Win32 COM developers

With the release of the much heralded "Delphi 2007 for Win32" by a company with a new name, it finally appears that pure mathematical logic has won out over the ego and pigheadedness that have prevailed for the past 8 years or so.

The Delphi COM local server problem that I have blogged about so passionately has finally been fixed in "Delphi 2007 for Win32". This problem has existed since Delphi 3, and over the years Borland steadfastly refused to fix it or even acknowledge that it is a bug. Maybe it is Vista, with UAC in particular, that forced them to fix it. I dare not claim credit for this turnaround, despite my submissions through the normal channel and the numerous brush-offs.

The fix is almost identical to what I have described here. The important things to note are that the server now obeys the documentation in relation to the local server command line switches, and that it no longer bothers to re-register itself after a successful COM activation. The latter behaviour was the most stupid of all. Programming is pure logic.

There are implications for many who are not fluent with COM local servers and may have relied on this 'bug' to register COM servers without the switch, or for others who are aware of the problem but have cooked up their own special workarounds that may now be rendered ineffective or may even fail.

So they need to check their usage to determine whether it now conforms to the general COM local server registration convention.

Monday, July 2, 2007

Comodo Anti-Virus Ver 2 beta not suitable for Tablet PC

Over the weekend, I decided to trade my trusty AVG 7.5 for a new boy on the block called Comodo Anti-Virus Version 2 beta, which is also free.

Anti-virus products these days have, in my mind, grown beyond what they are supposed to do - catching viruses. This thing does even more - HIPS (Host Intrusion Prevention System). Isn't that the Comodo Firewall's role?

Anyway, this tool is not suitable for a Tablet PC because it prevents the TIP (Tablet Input Panel) from coming up. If you cannot use the TIP, you may as well throw the tablet away. It is probably too aggressive, which amounts to being dumb. It keeps asking me to allow or disallow things that Comodo Firewall already knows are harmless - rather like the Mac vs Vista advertisement poking fun at the Vista UAC. Both Comodo Anti-Virus and Firewall are extremely dumb!

After trying in vain to get the TIP up, the next best thing was to uninstall it and put my trusty AVG back on. So goodbye, Comodo Anti-Virus.

Thursday, June 28, 2007

Changing pricing model to survive for software product companies

Cusumano [1] observes that software companies are changing their pricing models to survive:
As prices of software products have fallen, the “99% of zero is zero” rule that I wrote about in The Business of Software has taken effect and forced many software product companies to close or sell to some larger competitor. Our database at MIT suggests we have “lost” nearly half of the public software product companies since their number peaked in 1997 at over 300. Product companies that have remained in the software business have had to adapt their strategies as well as pricing and delivery models. Product companies can no longer afford to spend enormous sums of money on R&D and marketing to build and sell features that customers do not want or use.

[1] "The Changing Labyrinth of Software Pricing" by Michael A. Cusumano, pp 19-22, CACM Vol 50, No. 7, 2007.

Is lack of experience a good reason to discard a technique or technology?

Recently, I was talking to a friend who is in the consulting business, and the conversation turned to an architecture design he was considering.

After taking in his requirements and his approach, I gingerly offered my view. I proposed a solution using a technology that the rest of the world is adopting to solve his kind of problem. He listened and then rejected it. I thought that was odd and inquired about the rationale behind it.

He told me that he and his company did not have this kind of expertise but did not want to tell the client, as doing so would reveal the company's technical weakness; he wanted the client to depend on his company's technical expertise. Kind of weird logic to me.

Not trying to be pushy but genuinely wanting to help him move with the flow, I told him that he would be making a grave mistake, because he did not have the expertise to say with a high degree of certainty whether that technology was right or wrong for his problem. He therefore needed to seek assistance from those who possessed that kind of knowledge.

We did not reach any conclusion, but I walked away totally disgusted at how unethical one can be in the IT industry: not giving the client the right solution because the person who is supposed to help the client arrive at a solution does not possess that kind of knowledge.

Imagine a GP who suspects a patient has a brain tumour not recommending brain surgery because he/she cannot perform the surgery or diagnose the patient. In the medical world, the GP would, without hesitation, refer the patient to a specialist.

Excel/VBA Error with .Net COM component

Recently, I was asked to demonstrate how to package some good old Excel/VBA script into a COM component so the organization can reuse the code. Sharing .bas files is problematic.

So I used VS2005 VB.Net to develop what is essentially a hello-world COM component. It works fine on my machine. But when I tried to execute the following VBA code:
Sub Test()
    Dim x As MyComClass
    Set x = New MyComClass ' L1
    x.SayHi "Paul"
End Sub
on another machine with an identical version of Excel 2000, line L1 fails with an HRESULT of 0x80070002. Strange, I thought! It is complaining that it cannot find the assembly.

My initial reaction was to run RegAsm /codebase. It still failed. Then I installed the assembly into the GAC and it still failed.

That's odd. Finally, firing up Process Explorer reveals the reason.

It turns out that some components requiring CLR 1 are loaded before mine, and since only one version of the CLR is permitted per process, my demo COM component, a CLR 2 assembly, cannot be loaded by CLR 1. Hence the error message is actually correct - file not found - because CLR 1 cannot understand CLR 2 metadata.

Once this became obvious to me, the logical thing to do was to give Excel.exe its own config file specifying the supported runtime as version 2. This forces those CLR 1 components to run on CLR 2.
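For reference, the kind of file I mean is an Excel.exe.config placed in the same directory as Excel.exe, looking roughly like this (v2.0.50727 being the CLR 2 version string):

<?xml version="1.0"?>
<configuration>
  <startup>
    <!-- Prefer CLR 2; CLR 1 assemblies are then rolled forward onto it. -->
    <supportedRuntime version="v2.0.50727" />
  </startup>
</configuration>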

The moral of the story is this: if you are considering writing a COM component in .Net, it is wise to target CLR 1, because it can then be rolled forward to CLR 2 automatically and you do not need an application configuration file to make that happen.

Wednesday, June 27, 2007

Two very poorly understood and inappropriately staffed roles in a software house

Many years ago, I met a young software engineer who was considering whether or not to take up a role as a software installation writer in a software company that ranked among the largest privately held software companies in its country.

He was worried that it would be a mundane job with probably no path into development and little chance to improve his knowledge of Windows, so I had a long conversation with him, pointing out the following key aspects:
  1. It is a role poorly understood and poorly rewarded by management.
  2. It is a role that can make or break the successful delivery of a piece of software onto the customer's machine.
  3. Its output is the first point of contact with the customer, and the customer's positive or negative impression of the software is largely influenced by this piece of software.
  4. The person needs more skill - hence my earlier comment on reward - than the developers of the software, as he has to know all the quirks and weird configurations out there. Often he has no way of knowing them with precision; more importantly, he needs the skill to detect them and plan evasive actions. On top of that, he also needs to be aware of the make-up of the software he is installing, its required OS support, and its hardware/software dependencies, if any.
  5. He also needs to know the right way to install the software and to uninstall it cleanly, in a way that does not violate Windows security.
  6. He also needs to be a competent developer so that he can supplement the installer script. Often he needs to know C/C++ because, often, that is all he can rely upon at that very moment.
He was initially skeptical of my assertions because that was, and still is, the general trend, but he did take the job. After a few years in the role, he agreed with me wholeheartedly. Over the years, I have observed the same misunderstanding being repeated time and time again. The manager always thinks that it is just a matter of pressing a few buttons in, say, InstallShield to crank out the installation package - so simple that it can be given to the most junior staff. Sure, some simple applications only require a simple installation package. But as the sophistication grows, such a naive view can at times prove fatal.

This is the first role that, in a view formed from the trenches, I have always maintained is extremely poorly understood and under-appreciated by management. I tip my hat to those installer script writers whose products have flawlessly guided me through, particularly OS installers.

The second role is the Build Master. I have long thought the same about it, though without the same strong conviction as for the installation writer. But the book "The Build Master - Microsoft's Software Configuration Management Best Practices" by Vincent Maraia finally tipped me over.

As Vincent says:
Many project and program managers think that the actual building of a project is a pretty trivial. Their attitude is that they can simply have the developer throw his code over the wall and hire someone to press a Build button, and everything will be fine.
[...]
I recommend that you consider the build process a piece of software that you regularly revise and deploy throughout your product team.
Many projects that I have worked on or reviewed do not even have an automated build process. Different parts of the project are built on various machines, with settings nobody can account for. How can such an ad hoc and haphazard arrangement manage to produce a consistent and reliable product? It is just sheer luck.

Once again, as Vincent says in his book, management does not understand the skill required of, and the reason for, the Build Master.

Saturday, June 23, 2007

Outlook Clone - Evolution in Windows at last

Microsoft's Outlook has been the de facto PIM/mail client for a long time, with no challenger in sight.

The reasons why it is so popular are:
  • It is a damn good Internet mail client.
  • It is a Microsoft Exchange Server client - not too many programs can talk to Exchange Server.
  • It has extremely good synchronisation with the Pocket PC - naturally, as they come from the same stable.
  • It is a good organiser for calendar activities and tasks.
Well, Outlook's dominance in this arena is now being challenged by a program called Evolution, which has been around on Linux for a few years and has now been ported to Windows.

I am always on the lookout for ways of not paying Microsoft, so I obviously did not pass up the chance.

The installation on XP is rather smooth and uneventful. What is disturbing is the lack of any entry in the Start Menu; nor was there any option during installation to choose whether to create one.

After installing from an Admin account, I returned to RANU to test Evolution for LUA compliance. Surprisingly, it runs fine in LUA.

The installer appears to have forgotten to create the Start Menu entries. Hence, to start Evolution, you need to run Evolution.cmd in the Evolution directory. This briefly opens a command console, which starts StartX.exe, which in turn runs Step2.cmd. The comment in Step2.cmd is kind of worrying: "Yes, I know this takes a while. Sorry. So be patient."

The first time it starts in an account, it presents you with the customary wizard to set up the POP account. The first thing an Outlook user will notice is the difference in user interface: MS Outlook is slick and Windows-ish, naturally, while the clone has a distinctly alien look, but is still functional.

Once that task is completed, the main screen is up. This is where I got lost completely. I was struggling to find the "Send/Receive Mail" button so that I could pull down some mail messages, which I knew were in this account.

After a bit of a struggle, I found it under the File menu. But after pressing it and seeing the briefly displayed progress window, I still could not find the messages in the Inbox. WebMail access to this POP account confirmed that the test messages had gone, so obviously Evolution must be working.

After some exploration of the menus, I discovered the key settings:
  1. Press "View | Layout | Show Side Bar" as this will present you the customary Outlook side bar allowing you to select folders.
  2. Press "View | Layout | Show Toolbar" as this will present you the toolbar.
Once these two settings are set, the operations and appearance of Evolution are uncannily close to Outlook.

It is early days to proclaim that I will switch over, but I will definitely monitor its development.

One of Evolution's claims to fame is that it can hook up to a Microsoft Exchange Server, and Linux users have been using it for that purpose. Since I do not have an Exchange server to test against, I will take their word for it.

There are some window Z-order problems, very typical of a program that has forgotten to nominate the correct owner window for its dialog boxes - for example, when it prompts for the POP account password because the saved one is incorrect. The sequencing of these windows is very untidy.

When closing down, Evolution 2.8.2 causes an application error. At the moment I cannot determine the cause, but it is repeatable.

Since it is free, some of the roughness is tolerable. The next step is to compare it with Thunderbird 2 plus the Lightning calendar plug-in. So stay tuned.

Bug reporting - a lesson CodeGear can learn from Open Source

My recent use of the GnuWin32 port of grep resulted in my discovering a bug and, in keeping with my usual practice, I filed a report. What happened next is a lesson that Borland/CodeGear should learn from the Open Source community.

Not only did they not whinge, throw up excuses, or accuse their user and bug reporter of not understanding their 'feature'; they digested my bug report carefully and promptly acknowledged that it was a genuine bug. By contrast, even when a Borland bug has been proven beyond doubt, they still refuse to fix it.

Within a short time, they posted me a message advising that a fixed version was available. Full marks for that.

Contrast that with the CodeGear Delphi bug reports I have filed over the years: I have never received this kind of acknowledgment and quick response. More disturbingly, CodeGear's Delphi product is NOT FREE and yet provides far worse support than free software. Hence, price does not equate to provision of support, not even meagre support.

People considering CodeGear software need to remember this before spending their money.

Saturday, June 9, 2007

Different approaches in handling users' bug report

I have used many software libraries and packages over the years, and whenever I discover something that does not agree with the intended usage, I always take the time and effort to develop the simplest scenario that demonstrates the anomaly. Then I submit it to the developer for comment. Being a developer myself, I know one cannot always be right, and if someone can show you a scenario that duplicates the anomaly, it is a great help, as half the work of improving the product is already done.

Below are a number of my fault-reporting experiences and the vastly different responses I have encountered. You be the judge of who deserves the wooden spoon.

Possible Microsoft Enterprise Library bug - Range Validation configuration.

Recently, I decided to use the Microsoft Enterprise Library 3.0, and in particular the Validation Application Block. During my experimentation with the Configuration Console, I discovered that the console was not behaving logically. After extensive experimentation, I formed the opinion that the validation process used in the console is faulty, preventing me from saving the configuration file, though I did find a workaround.

So I fired off a comment to the forum responsible for the Ent Lib to seek expert opinion. With speed commensurate with Internet time, the forum moderator recognized the bug and raised a work item. Full marks for their effort.

Something amiss - two minor releases of Gnu Win32 port of grep behaving differently

I am a frequent user of the great tool grep, which has its roots in Unix. Recently, I had to perform a recursive search using a regular expression over some C# files. I found a version of grep on my machine that allowed me to locate the lines successfully with this command line:
 grep -R --include="*.cs" -P "^[ \t]+class" *
I looked at my grep, realized it was slightly dated, and wondered if there was a more recent one. So I searched Google and found two sites on SourceForge: the GNU utilities port to Windows (UnxUtils) and the Win32 versions of GNU tools (GnuWin32); I am not sure why there are two. To cut the story short, UnxUtils contains an old version of grep, 2.4.1, which does not have the --include and -P (Perl regular expression) switches.

The latest GnuWin32 release, 2.5.1a-1, has a grep with the required switches; however, they do not function according to the man page. The site also hosts an earlier release, 2.5.1a, and surprisingly this version not only has the required switches, they also function according to the man page.

Seeing these distinct differences in behavior, I duly submitted a bug report. The information provided must have been clear enough to indicate that something was wrong, as the status was changed and the report assigned to someone for further investigation. It could turn out that my usage of the switch combination is wrong, in which case 2.5.1a-1 would be correct. If my usage is right, and I am pretty sure it is, then 2.5.1a-1 needs to be compared with 2.5.1a and rectified accordingly.

Problem in Borland's Delphi library.

I must admit that I am not a long-time user of Borland's Delphi programming language, but in the last two years I have begun reading and writing code in Delphi - both Win32 and Delphi.Net, including of course VCL.Net.

I am also fortunate to be surrounded by Delphi experts who can either correct my usage or advise me of these 'Delphi specialties'. They have also fed me many interesting cases, and many I have taken on as a challenge to get fixed, as they are clearly illogical.

Many are definitely bugs, while others are of Borland's own creation. Samples can be found in this blog - for example, how Delphi handles a COM server, or the attempt to change the ECMA CLI standard. There are others not reported in my blog.

It is particularly interesting to note that the COM problem was first discovered in Delphi 3, and numerous users tried in vain to have it rectified. Support preferred to argue at length about the classification of the problem rather than discuss the technical issues. Perhaps they were not skilful enough to understand them.

My discovery actually raises the possibility that someone at Borland accidentally deleted the line testing the start-up mode; the code still compiled correctly, giving Borland an impression of correctness.

They see nothing wrong because they do not develop in LUA. Thanks, Borland, for giving me a real-life illustration of the danger of not developing in LUA. This may be the same reason why their TRegistry fails too.

Even putting this aside, my analysis of their code clearly indicates that Borland's developers do not understand their own exception handling and so throw the wrong exception, or vice versa. This allows the application to crash catastrophically during start-up.

With only 9 lines in dispute and a well-described test scenario, they steadfastly maintained that there was no bug. If that is the case, why would an automation server that is run stand-alone, and thus does not require registration, fail to run in LUA? When cornered, Borland offered all sorts of stupid 'fixes', like using a special registration to the HKCU hive.

Stubbornness does not fix code. It only gives developers more reason to look down on the product. It gives me indisputable examples of the product's poor quality and a more convincing argument during product evaluation or elimination. Developers who followed my suggestion to address this issue came out smiling, without needing unorthodox hacks or even considering the laughable workaround from Borland.

My blog documents many other encounters of this kind. Some of the 'incorrect behaviour' they try to justify as features; on others the company could not be bothered to comment.

To be fair, their developers did acknowledge some of my findings, not documented here, to be spot on, exploring areas they had not considered. But that was as far as it went. Hardly what you would call satisfactorily resolved.

A far cry from other, more mature and definitely more user-friendly companies and communities.

I will definitely be monitoring Delphi 2007 to see if these issues are addressed. If they cannot even admit fault - the first step in fixing any bug - my bet is that the faults will still be there. So if you are considering an upgrade, check these issues out first, or else you may be getting the same old thing in a new wrapper!

Friday, June 8, 2007

Class libraries not aware of LUA

[This is a reproduction of a message in my old blog. With the imminent release of Delphi 2007, I am really interested to see if they have addressed this and how.]

Many frameworks provide class wrappers for Windows Registry access, and many of them, including ATL's CRegKey and Borland's Delphi TRegistry, have a problem dealing with registry access under LUA (Least-privileged User Account).

It is not that they lack the facility to deal with this, but the default security access rights are set too high, and unwary developers who do not develop in LUA fall straight into the pitfall.

ATL's CRegKey::Open() defaults to KEY_READ | KEY_WRITE, and TRegistry.OpenKey uses TRegistry.Access, which has the same default value.

What makes Borland's class less capable than ATL's of dealing with LUA failures is the return value of the respective member functions. CRegKey::Open() returns the value from ::RegOpenKeyEx(), the raw API, whereas Borland returns a Boolean that is true if the API returns ERROR_SUCCESS. In doing so, Borland discards the vital information that would allow the caller to take appropriate action.

For example, when using the default access rights, CRegKey::Open() returns 0x5 ("Access is denied") for an existing key in HKLM. If the key does not exist and you do have access rights, the return value is 0x2 ("The system cannot find the file specified"). Borland's TRegistry.OpenKey(), in contrast, can only return false to indicate that something is wrong; exactly what is wrong the caller cannot determine, for lack of information. Hence, in many of Borland's own usages, when it fails to gain access to HKLM with the default access rights it simply assumes that the key does not exist and performs the wrong recovery routine!
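By way of contrast, the .Net registry wrapper preserves the distinction between the two failures. The following C# sketch (the key path is made up) shows "missing" and "access denied" arriving through different channels - exactly the information that TRegistry's single Boolean throws away:

using System;
using System.Security;
using Microsoft.Win32;

static class LuaRegistryProbe
{
    static void Main()
    {
        try
        {
            // Ask for write access to HKLM, as the defaults discussed above do.
            using (RegistryKey key = Registry.LocalMachine.OpenSubKey(@"SOFTWARE\SomeVendor\SomeApp", true))
            {
                if (key == null)
                    Console.WriteLine("Key does not exist (the 0x2 case).");
                else
                    Console.WriteLine("Key opened with write access.");
            }
        }
        catch (SecurityException)
        {
            Console.WriteLine("Key exists but access is denied (the 0x5 case).");
        }
    }
}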

Given that Borland has already published this interface, it cannot change it. At best, it can add a read-only property that lets the caller query the underlying error after the operation, but that requires calling code to change. Oops!!! One Borland library function that causes me grief under LUA is BDE.DbiInit().

I am left wondering whether Borland's developers develop their products in LUA or in Admin accounts. From what I have seen, obviously the latter.

VCL Form Vs Win Form performance (old posting)

[This is a reproduction of a message from my old blog. It is reproduced here to remind me to retest this when BDS 2007 is released.]


There is plenty of noise on the Internet regarding the performance of Windows Forms (WinForm) versus Delphi forms, but not many people are willing to publish profiler results to back their claims.
While the results published here are not exhaustive, they are repeatable and compare two aspects:
  • How do VCL.Net, VCL.Win32 and WinForm compare to each other on loading a form?
  • Are they sensitive to the number of controls on a form?

The tool I used to profile them is AQTime 4 from AutomatedQA, which can profile Delphi.Win32, Delphi.Net, C# and C++ (managed or unmanaged).

Two applications were developed. One, the large app, has 1 tab control with 5 tab pages containing buttons (47), combo boxes (20), edit boxes (40), rich edit controls (12) and tree views (16) - 241 controls altogether.

The other is a lot smaller, with 1 tab control, 5 tab pages and 40 controls in total of similar types.

These applications were developed in Delphi.Win32, recompiled for Delphi.Net and then re-coded as WinForm using either Delphi or C#, all in Delphi Studio 2006. None of the applications contained any code other than what was generated by the Form Designer.

Here are the results for the large app:

Large application with 241 controls

Build Type    Method              Time with Children (sec)
VCL.Win32     TForm2::Create()    0.16606
VCL.Net       TForm2::ctor()      1.20292
WinForm/C#    WinForm::ctor()     0.2645

For the smaller application, here are the results:

Small application with 40 controls

Build Type       Method             Time with Children (sec)
VCL.Win32        [No debug symbols]
VCL.Net          TForm1::ctor       0.53663
WinForm/Delphi   TWinForm12::ctor   0.21640

Profiling results from two real applications, one using Delphi 3 and the other Delphi 2006, produced data in agreement with this trend but with much more severe ratios, because they contain actual code that increases the load time further.

From these data we can conclude:

  1. VCL.Net is slower than VCL.Win32, and analysis of the call stacks in the two technologies reveals that far more method calls are involved in VCL.Net than in the lean VCL.Win32, even for reading form resources.
  2. VCL.Net is also slower than WinForm, regardless of the language used to program it.
  3. VCL.Net is more sensitive to the number of controls on a form than WinForm, which is essentially flat. The ratio of VCL.Net to WinForm is 2.5 : 1 for a form with 40 controls, compared with 4.5 : 1 for one with 241 controls; in other words, the load time of VCL.Net grows with the number of controls on the form.
  4. WinForm is relatively insensitive to the number of controls on the form.

So here you are.