Keep quiet and shoot the messenger: this is classic security by obscurity.
Charlie Miller recently found a security vulnerability in iOS and demonstrated it on YouTube. This has embarrassed Apple.
I wonder how many 'silent messengers' are out there merrily exploiting holes without telling Apple. After this episode, why would any researcher bother to tell Apple, the ungrateful lot?
A site devoted to discussing techniques that promote quality and ethical practices in software development.
Thursday, November 10, 2011
Sunday, October 16, 2011
Win 7 slow file copying to network drive - is it a Win7 problem?
I was backing up files using RoboCopy from my Win7 (64-bit) home machine to a network drive mounted on an XP Professional machine and noticed it was awfully slow. A check of Network Utilization on the Task Manager's Networking tab revealed it was clocking only 2-4%, and no one was using the network except me.
When I read this thread seeking a solution to my problem, I never believed it was the fault of Win7. Why? When I did a similar operation from my laptop over a Cat5e cable, the network utilization was constantly in excess of 98%. Hence I knew it was neither Win7, as blamed by the above thread, nor my network.
I knew it had to be hardware associated with the machine that exhibited the slowness. Thankfully, experience told me not to follow instructions posted on the Internet blindly, or I could have ended up in worse shape than I began.
It turned out the solution was simpler than many would have guessed, and while it was highly unusual, the signs were always there.
It was a faulty Cat5e cable connecting this machine to the router. The cable was not exactly broken; the transmit pair must have been hanging by a thread, resulting in slow copies to the server but acceptable performance when copying from it. What pointed me to this simple solution was the fact that I had used a totally different cable when connecting the laptop to the network.
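For context, the backup run was along these lines (the paths here are illustrative, not the ones I actually used):

```shell
:: Illustrative paths; /MIR mirrors the source tree to the network share,
:: /R and /W keep retry counts and waits short so a fault shows up quickly.
robocopy C:\Data \\xpbox\backup /MIR /R:2 /W:5
```

Watching Network Utilization in Task Manager while such a run is in progress is what exposed the asymmetry between copying to and from the server.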
Wednesday, September 14, 2011
Shame on Telstra Text Buddy for failing LUA Test
You would think the largest telecommunications company in Australia would develop and release software that works under a least-privileged user account (LUA). Sadly not.
Telstra has a bonus credit feature that gives its pre-paid mobile broadband users credit they can use to send SMS messages. One of the two ways to send SMS from the comfort of your computer is a Telstra-developed program called Text Buddy (version 1.0), which looks like this:
Despite the fact that this program is only compatible with Windows XP, it demands administrator privileges to run. Bad!!!
I bet you any money that its developers do not practise LUA development and that they developed this program in an Administrator account.
Thursday, August 25, 2011
Would I recommend NOOK for reading non-fiction materials?
I am grateful to have received a NOOK Color for Christmas, and to avoid writing a review of a device from superficial experience, as many reviewers do, I decided to hold off until I had used it adequately. Now, after more than six months of reading many technical eBooks on it, I believe I have enough experience to write a review.
This review is from the usage and perspective of a reader of technical materials. Reading technical material is very different from reading a novel, so if you are thinking of getting one of these devices, a NOOK or something similar, make sure you have a good idea of what kind of material you will be reading. Try it out with a DRM-free technical book.
If you are planning to use it to read technical materials, I definitely do not recommend buying a NOOK (Color or otherwise), for the reasons I list below. As a previous owner and long-time user of a Fujitsu Tablet, I am no stranger to this kind of keyboard-less device.
With no apology to Barnes & Noble, I have not bought a single B&N DRM-controlled eBook, for obvious reasons, and probably never will. I bought all my DRM-free watermarked eBooks from InformIT, which is a great site for technical eBooks because you get both the ePub and PDF formats for one price; I can put the ePub version on my NOOK and the PDF version on my notebook. That is the wonderful benefit of DRM-free books, and it allowed me to test the NOOK and find out which format works best.
Here are the reasons why I believe a NOOK is an unsuitable reader for technical materials:
1) The extremely primitive and incapable user interface is the root cause of several of the problems elaborated below. It makes a mockery of the large storage capacity, which lets you carry many books with you, when the interface is so primitive that you cannot keep several books open at the same time, the way you would with physical books on your desk, and return to the page where you left off in each. This is a severe and fundamental design oversight.
When reading a novel you will never be reading several at once, darting from one to another, so the ultra-primitive user interface is adequate for that purpose but totally inadequate for anything else. By contrast, it is a joy to use PDF-XChange Viewer to read the PDF versions of those eBooks on my notebook.
2) Even setting aside the problem mentioned in 1), reading just one book (in ePub format) is very frustrating. Few technical books lack notes (endnotes, footnotes, or a bibliography), and all provide hyperlinks to them. While it is very convenient to tap a hyperlink to jump to its destination, you have only a few seconds in which to jump back to where you came from, because the back button at the top middle of the screen disappears after several seconds. The time it is allowed to stay on screen is not even configurable, yet there is a setting to enable animated page turning.
Surely the developers of this device cannot be serious, placing animation ahead of functionality. Once that back button disappears, you have no way of bringing it back, and returning to where you came from becomes a major hurdle. I often keep a piece of scrap paper with my NOOK for jotting down page numbers.
3) Following on from 2), one wonders why anyone would provide a slider bar for going to a given page when an edit box would be more functional, particularly when these touch devices are too imprecise to move even one page either way. Unless you remember the page number when you tap that hyperlink, you will have a hard time finding your way back. As a result, it is a very frustrating device to use. I have no such issue reading the PDF on my PC, where I can jump to a particular page by typing it into a text box.
It is definitely a sad case of cuteness trumping functionality.
4) The ePub reader cannot magnify diagrams at will, which is not only annoying but downright dysfunctional when reading technical material full of diagrams and charts. The PDF viewer can magnify; the ePub reader cannot.
5) Because it cannot retain your place in several books, the NOOK is very frustrating as a replacement for physical books. You can do this comfortably on a notebook but not on a NOOK.
I definitely do not recommend buying a NOOK to read technical materials.
Wednesday, August 24, 2011
Good way to study the effect of sgen.exe for XmlSerialization
SGen.exe is a tool from Microsoft that pre-generates a .NET assembly containing the code that serializes your objects. There is no shortage of blog posts telling you how to use it, like this one, so that is not repeated here.
We developers are often a doubting lot, wanting to know what magical power this sgen.exe has that it is claimed to make some difference in performance, or does it? Apart from pre-generating a serialization assembly, much as NGen does, to avoid generating it on the fly, one wants to know what is in it and how it works.
The best way to answer these questions, and to study the output of SGen to convince yourself of what it does and that it is indeed loaded, is to include the following switches when generating:
- /debug - produces the PDB file.
- /keep - keeps all the temporary files used in producing the DLL. This is extremely useful because the .cs files allow you to put breakpoints in the generated code; they are also extremely educational to study.
- /force - forces regeneration. It is better to also include /o to generate the files into a separate directory, which you can empty before each run of SGen; that way you do not accumulate temporary files and create a very confusing situation.
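Putting those switches together, a study run might look like this (the assembly name is illustrative; substitute your own):

```shell
:: Empty the scratch directory first so only this run's output remains,
:: then generate with debugging info and keep the intermediate .cs files.
sgen.exe /debug /keep /force /o:GeneratedSerializers MyTypes.dll
```

After the run, GeneratedSerializers will contain the serialization DLL, its PDB, and the intermediate C# source you can read and set breakpoints in.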
SGen basically generates derived classes of XmlSerializationWriter and XmlSerializationReader containing code to write your class members to, or read them from, the XML data stream. It uses an XmlWriter to write to, and an XmlReader to read from, an XML document. If you study these classes, you will see that they share some similar methods.
Generating an XML document from a class, or regenerating the class from a document, can be done in many ways. In fact, if you study the generated files you will discover that there is really no magic behind XmlSerializer.Serialize() and XmlSerializer.Deserialize(). Whether you call XmlWriter or XmlReader to handle your class's members yourself, or some generated code does it for you, the process is the same.
If the overhead of dynamic code generation, or of building and using SGen.exe, is too troublesome or does not meet your needs, you can cook your own using the various techniques and classes available in .NET.
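As a minimal sketch of the round trip that the generated writer and reader classes perform on your behalf (the Person class here is a made-up example, not from SGen's output):

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

public class Person
{
    public string Name;
    public int Age;
}

public static class Demo
{
    public static void Main()
    {
        var serializer = new XmlSerializer(typeof(Person));

        // Serialize to a string: internally this drives an
        // XmlSerializationWriter-derived class over an XmlWriter.
        var writer = new StringWriter();
        serializer.Serialize(writer, new Person { Name = "Ada", Age = 36 });
        string xml = writer.ToString();

        // Deserialize back: internally this drives an
        // XmlSerializationReader-derived class over an XmlReader.
        var roundTripped = (Person)serializer.Deserialize(new StringReader(xml));
        Console.WriteLine(roundTripped.Name); // prints "Ada"
    }
}
```

Stepping through this with the /keep-generated .cs files on hand shows exactly which generated method handles each member.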
Labels:
.Net
Tuesday, August 9, 2011
Driven to Linux by Microsoft
I could rephrase this as "Driven away from Windows by Microsoft". No matter which way one words it, Microsoft has driven me away to Linux. As a staunch, long-term Microsoft developer since DOS 1, a few years back I would not have believed that today I would be writing code on Ubuntu using MonoDevelop/Mono.
What caused me to wander away from Microsoft? Several things.
1) The advent of machines with hardware virtualization support, combined with Microsoft's draconian, unfriendly licensing requirements, including the consumer-unfriendly activation scheme, is largely the impetus for me to desert Windows. Virtualization (I use VMPlayer but could easily use VirtualBox) allows developers and ordinary users to have a number of 'dedicated' machines, or 'machines' that can be restored very quickly. But if the VM runs Windows, it becomes very expensive: you literally require one license per VM, and not only for Windows but for Office etc.
With more and more multi-core 64-bit machines being released, they are ripe for running VMs, but Microsoft is not making that easy or economical. Don't try to defend this with XP Mode: it is only available on the higher editions of Windows 7, while other virtual machine software runs happily on lower editions of Windows and offers wider guest OS support.
In fact, Microsoft goes out of its way to frighten its customers.
2) Running up a Windows VM and an Ubuntu VM is like watching a race between a tortoise and a hare. Even the sizes of the VMs are a stark contrast. MonoDevelop is also lightning fast compared with VS2010. This shows how inefficient Windows has become.
3) It is also my "pay Microsoft the minimum amount" policy. Ubuntu comes with MonoDevelop and OpenOffice, all free. Who needs to buy Office? I can continue to use my Office 2000 in, say, an XP VM or a Win2K VM (no activation).
4) Ubuntu brings back the carefree feeling of the early Windows days: no activation to worry about, and no worry about installing another copy. So carefree and a joy to use. Not any more with Windows. The unknowns of Ubuntu/Linux are like uncharted waters to this newbie, with new things awaiting discovery. Just like the days with Windows in the '80s.
5) Mono, not a product from Microsoft, offers me cross-platform, cross-language development without using Java, which Microsoft has steadfastly refused to embrace. Sorry, you have lost my vote. There are so many good, solid tools that run fine on both Windows and Linux.
So it takes a lot for this staunch Windows supporter to look for greener pastures, but the step is well worth it. I urge all Windows users to give this a try. It is free and there is nothing to lose! Send a message to Microsoft. It is a joy to live in the Ubuntu VM!
Beware of 8.3 names
I wrote a piece of code that used System.IO.Directory.GetFiles(string, string) and executed it on Windows XP. I was totally dumbfounded to discover the function returning more than I was expecting. The same thing happened on Vista and Windows 7.
I was trying to get a list of '*.tab' files from a nominated directory, but the result contained files matching '*.tab*'. I was shocked. So I went to my favourite tool, the command prompt, and did a dir *.tab in the same directory, and got the same result. PowerShell does not produce this unexpected result.
Knowing that System.IO.Directory.GetFiles() most likely P/Invokes down to the Win32 API, I searched through it and confirmed that FindFirstFile() is responsible for this unexpected behaviour.
The explanation for this 'oddity' is given by Raymond Chen, and you can blame 8.3 file names for the problem.
So if you use System.IO.Directory.GetFiles(string, string), make sure you filter your results properly to avoid any surprises.
There is no way in CMD to list files with that kind of requirement.
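A defensive sketch of the filtering (the directory path is illustrative): re-check the extension on the long file name, since the wildcard may have matched only the 8.3 short name.

```csharp
using System;
using System.IO;
using System.Linq;

class Program
{
    static void Main()
    {
        // GetFiles("*.tab") can also return files such as "data.table"
        // whose 8.3 short name (e.g. DATA~1.TAB) matches the pattern,
        // so filter on the real extension of the long name.
        string[] files = Directory.GetFiles(@"C:\SomeDir", "*.tab")
            .Where(f => string.Equals(Path.GetExtension(f), ".tab",
                        StringComparison.OrdinalIgnoreCase))
            .ToArray();

        foreach (var f in files)
            Console.WriteLine(f);
    }
}
```

The OrdinalIgnoreCase comparison mirrors the file system's case-insensitive matching.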
Thursday, June 30, 2011
There are no dangerous languages, only dangerous programmers
It seems the data and comments presented in "Software Failures, follies and fallacies" by Les Hatton support the above comment.
"We all extol the benefits of our favourite programming language whilst denigrating other languages less attractive to us. In truth, published data from around the world of which Table 2 is a subset shows that there is no clear relationship between programming language and the defect density of systems implemented in that language. Ada, for example, supposedly far more secure than other languages produces systems of comparable defect density. In contrast, C is reviled by many safety-related developers and yet it is responsible for some of the most reliable systems ever written. We can conclude that programming language choice is at best weakly related to reliability."
However, I doubt his conclusion, drawn from
"a recent study comparing two similar systems of similar size, (around 50,000 lines each), one in C and one in object-designed C++, the resulting defect densities were shown to be around the same at 2.4 and 2.9 per 1000 lines respectively,"
that
"language has little effect on reliability, object-oriented or not and that the massive drive to object-orientation is another giant leap sideways in which the software industry appears to specialise."
It is unfair to conclude that a programming paradigm is defective, or fails to live up to its touted benefits, from observations of a programming language. This is particularly true of C++, which is actually hybrid in nature: a better C and an object-oriented programming language.
From my personal experience of development and of reviewing code, a person using an object-oriented language, be it C++, C# or Java, is not necessarily applying sound OO principles to their creation. It is the application of these principles that gives rise to the touted benefits.
If they are not applied, C++ and other object-oriented languages can result in systems with far worse defects and undesirable features than, say, a procedural language like C. Lakos echoes a similar sentiment:
It is completely wrong, however, to think that just using C++ will ensure success in a large project.
C++ is not just an extension of C: it supports an entirely new paradigm. The object-oriented paradigm is notorious for demanding more design effort and savy than its procedural counterpart. C++ is more difficult to master than C, and there are innumerable ways to shoot yourself in the foot. Often you won't realize a serious error until it is too late to fix it and still meet your schedule, such as indiscriminate use of virtual functions or passing of user-defined types by value, can result in perfectly C++ programs that run ten times slower than they would have had you written them in C.
[...]
Unfortunately, the undisciplined techniques used to create small programs in C++ are totally inadequate for tackling larger projects. That's to say, a naive application of C++ technology does not scale well to larger projects. The consequences for the uninitiated are many.
"Large-Scale C++ Software Design" by John Lakos, page 2, Addison-Wesley Longman Inc., 1996
Labels:
C++,
Programming,
Software Development
Wednesday, June 29, 2011
What a gem
I am so glad to have rediscovered this gem titled "Good practice in software" by Les Hatton. It should be read by every aspiring or experienced developer. It is full of timeless advice and facts, like this:
"Perhaps the biggest surprise in an industry awash with technology is that it doesn’t seem to make much difference. By far the biggest factor which emerges in most studies is the individual quality of the engineers who build a system and a few common-sense principles. We have known this since the admirable book by Fred Brooks, [6] which every aspiring software producer should read. Rather than absorbing this powerful lesson, the computing industry became obsessed with the notion that the process or bureaucracy of building software was the most important part. The evidence is to the contrary. No matter how well-defined a process, the quality of the product still depends mostly on the quality of the engineers who build it. To this, a good process can bring consistency and accountability."
So true! People are forever darting from one language to another, or from one technology to another, in an escapist attempt to avoid the hard slog of learning the trade!
Labels:
Programming,
Software Development
Thursday, June 23, 2011
Using PasswordSafe in Windows 7
I am surprised to see that the latest version (3.25.0.4042) still does not have an embedded manifest file to tell Windows 7 the execution level required. When you invoke pwsafe.exe, it attracts UAC's attention, resulting in this message box:
Not nice. You can prevent this by unblocking it. To do this, right-click the program in Windows Explorer and select Properties. On the General tab of the Properties dialog box you should see the 'Unblock' button, as circled below:
Once unblocked, it will not throw up that frightening first message box.
Let's hope this annoying issue is addressed soon.
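For reference, an embedded application manifest declaring the required execution level is a small, standard XML fragment along these lines (whether asInvoker is the right level for pwsafe specifically is my assumption, but it is the usual choice for programs that need no elevation):

```xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
    <security>
      <requestedPrivileges>
        <!-- asInvoker: run with the caller's token; no UAC elevation prompt -->
        <requestedExecutionLevel level="asInvoker" uiAccess="false"/>
      </requestedPrivileges>
    </security>
  </trustInfo>
</assembly>
```

Embedding such a manifest tells Windows explicitly what the program needs instead of leaving it to heuristics.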
Labels:
PasswordSafe,
Security
Tuesday, June 14, 2011
Safe way to close down a ClientBase<T> object
Recently a good friend of mine drew my attention to a situation I consider a bad disposable-object implementation, and kindly pointed me to an MSDN article offering some words of advice and a workaround. This saved me considerable research effort. I will address the bad disposable-object implementation later.
The problem is that when you use the using statement to dispose of a ClientBase(of T) object, the base class of a WCF proxy stub, an exception thrown in Dispose() on leaving the using scope can mask the application exception that caused execution to leave the scope in the first place. This denies the client knowledge of the application exception.
The MSDN article explains this phenomenon very clearly and provides an excellent recommendation for handling the problem. The recommended workaround places a higher value on the application exception than on the one likely to be thrown when Dispose(), or rather Close(), is called. The recommendation uses multiple levels of catches and then calls either ClientBase(of T).Close() or its no-throw equivalent, ClientBase(of T).Abort(), strategically.
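In outline, that recommended pattern looks like this (a sketch of the article's approach; SomeServiceClient is a hypothetical ClientBase(of T)-derived proxy):

```csharp
using System;
using System.ServiceModel;

// Sketch of the MSDN-recommended pattern: prefer Close(), and fall
// back to the no-throw Abort() so that a shutdown-time fault cannot
// mask the real application exception.
var client = new SomeServiceClient(); // hypothetical proxy type
try
{
    client.SomeOperation();
    client.Close();
}
catch (CommunicationException)
{
    client.Abort();
    throw;
}
catch (TimeoutException)
{
    client.Abort();
    throw;
}
catch (Exception)
{
    client.Abort();
    throw;
}
```

Note how the caller must remember which method to call in which catch block; that is the burden criticized below.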
While the workaround code presented in the article is written in a style that shows clearly how to tackle the problem, it is rather clumsy to use in real life. It expects too much of the developer to know when to call Abort() and when to call Close(). Furthermore, it changes the shape of the code drastically into a somewhat awkward and unfamiliar form. It therefore leaves room for improvement, and a simple class designed to meet the following objectives is described below:
This SomeHelper class must meet the above objectives.
Recognizing that the issue at hand is how to handle System.ServiceModel.ICommunicationObject, one of the interfaces implemented by ClientBase(of T), I constructed the following disposable generic class, called SafeCommunicationDisposal.
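A sketch of such a class, based on the behaviour described below; the exact implementation details (and its generic form) are my assumption, so it is shown here non-generically for brevity:

```csharp
using System;
using System.ServiceModel;

// Disposal guard for a WCF proxy: Close() only on a clean exit,
// Abort() (which never throws) on an exceptional or forgotten one.
public sealed class SafeCommunicationDisposal : IDisposable
{
    private readonly ICommunicationObject _communicationObject;
    private bool _safeToClose;

    public SafeCommunicationDisposal(ICommunicationObject communicationObject)
    {
        if (communicationObject == null)
            throw new ArgumentNullException("communicationObject");
        _communicationObject = communicationObject;
    }

    // Call this as the last statement inside the using scope.
    // If an application exception is thrown first, it is never reached.
    public void SafeToClose()
    {
        _safeToClose = true;
    }

    public void Dispose()
    {
        if (_safeToClose)
        {
            // Normal exit: Close() may throw, and that exception
            // reaches the caller unmasked.
            _communicationObject.Close();
        }
        else
        {
            // Exceptional exit, or SafeToClose() was forgotten:
            // Abort() never throws, so nothing gets masked.
            _communicationObject.Abort();
        }
    }
}
```

Typical usage wraps the proxy calls in a using statement over the guard, with SafeToClose() as the final statement inside the scope.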
The fundamental principle is to tell SafeCommunicationDisposal how to close down the ICommunicationObject when execution leaves the using scope.
If execution leaves by normal means, SafeCommunicationDisposal.SafeToClose() sets a flag telling SafeCommunicationDisposal.Dispose() to use ICommunicationObject.Close(). If Close() throws an exception, it is passed on to the user.
If execution leaves as a result of an application exception, SafeCommunicationDisposal.SafeToClose() will not have been executed, and SafeCommunicationDisposal.Dispose() will use ICommunicationObject.Abort(), a no-throw method, to shut down the communication gracefully. This does not mask the application exception.
If a developer forgets to call SafeCommunicationDisposal.SafeToClose(), SafeCommunicationDisposal.Dispose() takes the more defensive route of calling ICommunicationObject.Abort().
I believe this class helps make the code more readable and maintainable, using the familiar using pattern to correct the naive usage while minimizing code change.
The moral of this story is that if you are designing a disposable object, it is wise to heed the following advice: "Avoid throwing an exception from within Dispose except under critical situations where the containing process has been corrupted (leaks, inconsistent shared state, etc.)." This is a well-known no-no in unmanaged C++.
The problem is that when you use the using statement to dispose of a ClientBase(of T) object, the base class of a WCF proxy stub, an exception thrown in Dispose() on leaving the using scope can mask the application exception that caused execution to leave the scope in the first place. This denies the client knowledge of the application exception.
The MSDN article explains this phenomenon very clearly and provides an excellent recommendation for handling the problem. The recommended workaround places higher value on the application exception than on the one likely to be thrown when Dispose(), or rather Close(), is called. The recommendation uses multiple levels of catch blocks and then calls either ClientBase(of T).Close() or its no-throw equivalent ClientBase(of T).Abort() strategically.
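In outline, that recommended pattern takes the following shape. This is a sketch only; CalculatorClient matches the example used later in this post, and DoWork() is a hypothetical stand-in for whatever service operation you actually call:

```csharp
CalculatorClient client = new CalculatorClient();
try
{
    // DoWork() is a placeholder for the real service operation.
    client.DoWork();
    client.Close();
}
catch (CommunicationException)
{
    // The channel is faulted or Close() failed: Abort() is the no-throw shutdown.
    client.Abort();
}
catch (TimeoutException)
{
    client.Abort();
}
catch (Exception)
{
    // An unexpected application exception: abort the channel and rethrow
    // so the caller still sees the original exception.
    client.Abort();
    throw;
}
```

Note how the shape of the code has drifted a long way from the simple using statement, which is exactly the clumsiness discussed next.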
While the workaround code presented in the article is written in a style that shows clearly how to tackle the problem, it is rather clumsy to use in real life. It expects too much of the developer, who must know when to call Abort() and when to call Close(). Furthermore, it changes the shape of the code drastically into a somewhat awkward and unfamiliar form. It therefore leaves room for improvement, and a simple class designed to meet the following objectives is described below:
- Perform automatic closure of the ICommunicationObject using the same elegant form of the using statement.
- Prevent the application exception, the one thrown inside the using statement scope, from being masked, and pass it to the client unhindered.
- If there is no application exception, allow any exception generated during the disposal of the ICommunicationObject to pass to the client.
The desired usage should keep the familiar shape of the using statement, along these lines (SomeHelper is a placeholder name):

public void GetSomeData( .... )
{
    using( SomeHelper helper = new SomeHelper( new CalculatorClient() ) )
    {
        GetData( helper.Instance );
    }
}

This helper class must meet the above objectives.
Recognizing that the issue at hand is how to handle System.ServiceModel.ICommunicationObject, one of the interfaces implemented by ClientBase(of T), I constructed the following disposable generic class called SafeCommunicationDisposal.
using System;
using System.ServiceModel;

namespace DisposingCommunicationObject
{
    public class SafeCommunicationDisposal<T> : IDisposable where T : ICommunicationObject
    {
        public T Instance { get; private set; }
        public bool IsSafeToClose { get; private set; }

        public SafeCommunicationDisposal( T client )
        {
            this.Instance = client;
        }

        public void SafeToClose()
        {
            this.IsSafeToClose = true;
        }

        bool disposed;

        public void Dispose()
        {
            Dispose( true );
            GC.SuppressFinalize( this );
        }

        private void Dispose( bool disposing )
        {
            if( !this.disposed )
            {
                if( disposing )
                {
                    Close();
                }
                this.disposed = true;
            }
        }

        private void Close()
        {
            if( IsSafeToClose )
            {
                Instance.Close();
            }
            else
            {
                Instance.Abort();
            }
        }
    }
}

With this class I can rewrite the naive usage of the using statement from this:
using( CalculatorClient client = new CalculatorClient() )
{
    // use client to do work
}

to this form, which does not mask the application exception on leaving the using scope and yet closes down the communication object properly:
using( SafeCommunicationDisposal<CalculatorClient> d =
new SafeCommunicationDisposal<CalculatorClient> ( new CalculatorClient() ) ) {
CalculatorClient client = d.Instance;
// use the client to do work. If it throws exception
// it will be passed over to the client and not masked by exception
// when client.Close() is executed.
d.SafeToClose(); // This tells SafeCommunicationDisposal to use ICommunicationObject.Close()
}
The fundamental principle is to tell SafeCommunicationDisposal what to use to close down the ICommunicationObject when execution leaves the using scope.
If it leaves by normal means, the SafeCommunicationDisposal.SafeToClose() sets a flag to tell the SafeCommunicationDisposal.Dispose() to use ICommunicationObject.Close(). If Close() throws an exception, it is then passed to the user.
If it leaves as a result of an application exception, SafeCommunicationDisposal.SafeToClose() will not be executed and SafeCommunicationDisposal.Dispose() will use ICommunicationObject.Abort(), a no-throw method to shut down the communication gracefully. This does not mask out the application exception.
If a developer forgets to call SafeCommunicationDisposal.SafeToClose(), the SafeCommunicationDisposal.Dispose() takes a more defensive route of calling the ICommunicationObject.Abort().
I believe this class helps make the code more readable and maintainable, and by using the familiar using pattern it corrects the naive usage while minimizing code change.
The moral of this story is that if you are designing a disposable object, it is wise to heed the following advice: "Avoid throwing an exception from within Dispose except under critical situations where the containing process has been corrupted (leaks, inconsistent shared state, etc.)." This is a well-known no-no in unmanaged C++ too.
Thursday, June 2, 2011
Caveat in using Matlab DLL in ASP.Net
I was crafting another WCF service using basicHttpBinding that used a Matlab NE Builder generated DLL, and I kept getting a System.TypeLoadException on the class' constructor. This problem has been reported before, but there was no conclusive explanation other than the ASP.Net page timeout being set too short. Setting that to a long value did not help, and it certainly did not help me.
My unit test using the DLL directly proved that the DLL was working perfectly, hence the problem had to be environmental. Not being one easily defeated by this kind of issue, I took it up as just another challenge. There had to be an explanation.
Being certain that it was environmental, I began to investigate the ASP.Net runtime infrastructure. I knew that when the Matlab DLL, called a deployment DLL for all the right reasons, is first used, the Matlab runtime expands the embedded scripts, albeit compiled, into a runtime folder structure under the matlab_mcr folder. This folder structure replicates, from the root directory, the same directory structure in which the deployment DLL was built. matlab_mcr is a folder created in a nominated directory.
So if you build your deployment DLL in, say,
"C:\Documents and Settings\John\My Documents\My Matlab Project"
Matlab runtime will try to create a structure like this:
[TempDirBase]\matlab_mcr\Documents and Settings\John\My Documents\My Matlab Project
where [TempDirBase] is the base directory that the runtime is going to use. If it is %Temp%, then on XP it is
"C:\Documents and Settings\John\Local Settings\Temp"
If you are like me, using verbose and easy-to-remember project names organized in a meaningful directory structure, together with long Matlab function names, the path that Matlab replicates at run time can easily approach the maximum length of a path name.
In fact, the cause of the TypeLoadException from the Matlab class was exactly what I had encountered when using NUnit to unit test code that used a deployment DLL. In NUnit I simply changed the shadow copy directory.
In the ASP.Net situation, the length of the default ASP.Net temporary directory path, coupled with my build location, caused the operating system to fail to recreate the Matlab runtime structure because the path length exceeded the maximum permissible length.
Armed with prior experience and the ability to change the ASP.Net default temporary file directory to a different location, I used a two-pronged attack to ensure that Matlab's silly recreation of the directory structure would not bother me anymore.
Here is my technique to fix this problem:
1) Build the deployment DLL in an extremely cryptic and short path.
Build the deployment DLL in a very cryptic and extremely short path such as c:\0\1 or c:\01. I chose this pattern because I am going to have several of these deployment DLLs, so each one can grab a different number as its directory name. Since this is only for run time and is totally meaningless to humans, the names can be very cryptic. The aim is to make the path as short and as flat as one can get, and spare no mercy to Matlab.
You can even use these numbers as your extra version identifier.
This recommendation does not mean that you need to store your project that way. You should store it in as meaningful a directory structure as you can, for ease of maintenance and team spirit. By default, when Matlab replicates the build path at runtime, it discloses your project organization, and if you store your project in your profile area, Matlab can disclose who you are. By using a cryptic and short build path, you prevent Matlab from disclosing such information.
Since I always build these deployment DLLs with a batch file for good technical reasons, it is only a matter of adding extra batch commands to create the build directory described above, to copy all the .m, .prj, .cs, etc. files to the build directory, and to kick off the build from that build directory.
When the build is finished, the batch file copies the output files back to a generated output directory in the solution. It can also include commands to remove the build directory.
2) As an extra measure to combat this Matlab bloated-directory problem, change the ASP.Net temporary file directory, which can conveniently be set per application using web.config.
This is achieved by specifying a very short path name in the tempDirectory attribute of the <compilation> element.
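For example, a web.config fragment along these lines (a sketch; the path c:\t is an illustrative assumption, and any attributes your application already sets on <compilation> must be kept):

```xml
<configuration>
  <system.web>
    <!-- Point ASP.Net's temporary files at a very short path so the
         replicated Matlab directory structure stays under the limit. -->
    <compilation tempDirectory="c:\t" />
  </system.web>
</configuration>
```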
Armed with these and a very long "ASP Script timeout" for your application, changed via the IIS console, you should have no trouble using a Matlab deployment DLL in IIS.
Labels:
.Net,
ASP.Net,
Matlab,
Web Service
Thursday, May 26, 2011
Apple users, welcome to the real world!
If you have not been told that Apple machines get viruses, you should read this excellent report on the "Mac Defender" malware. As a realist, I have always laughed when Apple users naively believed the sales hype that their platform does not get viruses!
I had been following this Mac Defender for a long time, way before the popular media outlets picked up the story. When I first heard of it, I realized that Apple users' age of innocence had just been shattered. My warning to my friends was that these unrealistically complacent Apple users would be like a pack of drunken sheep to the wolf. Surprisingly, this same sentiment is echoed by Molly Wood on CNet. As a developer, I know software is bound to have vulnerabilities, as it is crafted by humans. Besides, Apple's OS is just a Unix, and Unix/Linux has always been known to have malware and viruses.
I believe it will become a lot worse before there is any form of relief, because these attackers have sharpened their modus operandi in the Windows world, and Windows users have been conditioned to be more alert about viruses, trojans and malware. To these attackers, Apple users are a trusting lot!
I am not surprised to learn of Apple's reported "unhelpful" customer service directive. Molly has outlined several notable exemplars of Apple's unhelpfulness. The "silence-then-solution pattern" will definitely put their users at great risk through Apple's generosity in giving attackers a wide window of attack opportunity. Does Apple care? I doubt it. The old "Poison DNS attack" and Apple's slowness in addressing it are just another shining example of a company that cares more about a facade than about what's behind it and its users' well-being.
Monday, May 16, 2011
Complete build process of Matlab component for .Net with meaningful version number
The previous post describes a recipe for injecting meaningful version number into NE Builder produced .Net assembly.
The normal NE Builder generated assembly using a non-embedded CTF archive requires the use of the /linkres option to describe, in its manifest, an external link to the CTF file. Unfortunately, this option is not available in a Visual Studio C# project, and as a result it is not possible to create a class library project in Visual Studio to automate this process; a batch process is still required.
Incidentally, someone in the forum was puzzled by the automatic copying of the CTF file when referencing an NE Builder generated assembly. This behavior is caused by the use of the /linkres option.
In view of the need to use a batch file or command-line operation to build the .Net assembly, it is therefore advantageous to automate all the build steps without invoking the Matlab IDE to generate the external CTF archive, the C# file and companion files. Below are the steps to construct this batch file:
1. Create a command to invoke the Matlab component compiler (mcc.exe) to generate the CTF archive, the C# file and companion files, using the -F switch with your Matlab .prj file. To see this switch's usage, invoke mcc -? at a command prompt.
This switch takes all the settings from the PRJ file and is a convenient way to centralize all the specifications in one place that is also available to the Matlab IDE, a convenient visual tool for setting those specifications.
2. Execute the CSC.EXE, the C# compiler, with the response file constructed as described in the previous post.
This batch program generates all the files afresh, without relying on a part-IDE, part-command-line build to complete the process; this ensures all the files stay in sync and the build can be incorporated into any automated build process.
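The two steps above can be sketched as a minimal batch file like the following. The project and response-file names are assumptions for illustration; adapt them to your own build:

```bat
rem Step 1: regenerate the CTF archive, the C# wrapper and companion
rem files from the project file, taking all settings from the .prj
mcc -F MyProject.prj

rem Step 2: compile the generated C# sources using the response file
rem constructed as described in the previous post
csc @MyProject.rsp
```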
Wednesday, May 11, 2011
Caveat in NUnit testing Matlab NE Builder produced assembly
An assembly produced by Matlab NE Builder, when loaded, needs to extract the compiled scripts into a cache area, which is normally the executable's directory.
When such an assembly is tested in NUnit with Shadow Copying enabled (the default setting), this extraction can generate a file path that exceeds Windows' maximum path length.
When this happens, it can raise an exception saying that it cannot create an instance of the MCR. Check the Text Output tab of the NUnit console for a message like the following, where the elided item is a very long path name:
Failed to remove ...
Verify file ownership and access permissions.
If you still want to use Shadow Copying in NUnit, go to the NUnit console's Settings dialog and specify a short directory as the Shadow Copy Cache, which defaults to "%Temp%\nunit20\ShadowCopyCache". Alternatively, turning off Shadow Copying can also alleviate this problem.
Labels:
Matlab,
NUnit,
Unit Testing
Recipe to add assembly version to Matlab NE Builder produced assembly
Matlab NE Builder, once called .Net Builder, is a tool from Matlab that packages Matlab scripts (the .m files) into a .Net assembly, making the functions specified in Matlab available to .Net applications.
This tool unfortunately provides only token .Net infrastructure support. It can produce a strongly named assembly, but it has no facility for the user to specify the assembly version data, a vital piece of information for supporting a strict version binding policy.
Every assembly produced by NE Builder, strongly named or not, has the version 0.0.0.0 and as a result is not very useful. According to the Matlab forum, Matlab has no way to deal with this issue. This blog post provides a recipe for adding an assembly version, or any other piece of information, to the NE Builder produced assembly. It does not use anything undocumented or any hack to achieve the result; it simply uses the same standard .Net build specifications that the NE Builder Deployment tool uses.
- From the Deployment tool (read: NE Builder), build the assembly, making sure that you clear the "Embedded the CTF Archive" option. When you use an embedded CTF archive, the builder deletes the CTF file after building the assembly; we need this file in the subsequent steps, so not embedding it lets us access it. Furthermore, it is easier if you make the source and output directories the same.
- Save the build steps to a build log and open it with a text editor.
- Look for the line containing CSC.exe, as we will use it to produce the response file for the C# compiler.
- Copy the text after CSC.exe into a text file. You can insert line break to make it more readable.
- Add AssemblyInfo.cs to the response file. This file contains the assembly information you wish to include in the assembly.
- Save the text into a file with the customary .rsp extension in the same directory as the source.
- Create the AssemblyInfo.cs file if it does not exist and specify your assembly version and other information.
- Run CSC with this response file to produce your assembly, which will contain the required assembly version.
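A minimal AssemblyInfo.cs for the steps above might look like this; the version numbers and description are illustrative only:

```csharp
using System.Reflection;

// These attributes are compiled into the assembly manifest alongside
// the NE Builder generated sources included in the response file.
[assembly: AssemblyVersion( "1.2.0.0" )]
[assembly: AssemblyFileVersion( "1.2.0.0" )]
[assembly: AssemblyDescription( "Matlab deployment assembly" )]
```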
This recipe then allows one to produce a versionable strong name .Net assembly that carries the Matlab script.
Tuesday, May 10, 2011
No use to defend an indefensibly bad user-interface
I was shown a web application designed to manage projects and progress reports, a la MS Project minus the Gantt chart and timelines. I am not a user of any project management software, so this post is not about its capability, though my shallow knowledge of the topic tells me that it is rather incomplete.
As a software developer with a low tolerance for terrible user interfaces that only their creator loves, I found in this package an example that really enrages me into highlighting it here. Consider the following screen capture showing the 'Select All' check box circled in red:
A sharp-eyed reader will instinctively spot something amiss in this diagram: the 'Select All' check box is checked but the other check boxes in that column aren't. Well, according to the developer, this is a feature. Let me describe how this 'Select All' check box works in this application.
Normally it is not checked, and the user can select the relevant row using its check box, which operates in the standard manner.
The minute you click on the 'Select All' check box, it selects all the check boxes in that column. Nothing strange about that; it is the same behavior as in Google's GMail or Hotmail, just to name a few.
However, the user interface becomes non-intuitive and distorted when you try to de-select all the check boxes by clicking on the 'Select All' check box, which by now has a check mark on it, an operation that comes naturally to all users. In this crazy, illogical scheme, the developer has literally created a different class of check box with the same look and feel as the standard ones. A true bastardization of the check box.
When one clicks on the 'Select All' check box a second time, instead of extinguishing the check mark and de-selecting all the check boxes in that column as everyone (bar the developer) expects, the check box keeps its check mark - a kind of one-shot check box, but not exactly. If you click it in a vain attempt to de-select all, it in fact selects all the check boxes for you again. There is no way to de-select all, and the 'Select All' check box is never disabled or grayed out. Because there is no way to uncheck all the check boxes in one operation, experimentation with this 'Select All' check box brings frustration and curses.
The standard check box has a binary state - click it to check, click it again to uncheck - and that is the behavior any user expects on seeing a check box, not the distorted one highlighted here.
The developer maintains that it is not a bug but a feature, just like so many do when cornered in an indefensible situation. I think the developer would be wise to read this book and constantly remind himself that "Your user is not you". In a clear but vain attempt to defend the indefensibly wrong user interface, the developer offers this way to extinguish the 'Select All' check mark, which is another brilliant example of his failure to understand "Your user is not you":
- Uncheck each check box in the column manually
- Then press the update button
- The round trip back from the server will clear the 'Select All' check box.
Correcting this distorted user interface is technically very simple but psychologically difficult, as the developer first has to admit that it is a bug; they already have JavaScript code to check each check box in the column, and it is a simple matter of running the same code to uncheck them. In fact, they expended more effort creating this only-its-creator-loves distorted user interface.
This software also has another time-wasting feature, which clearly has not gone through any user design mock-up such as "Paper Prototyping" by Carolyn Snyder.
There is a page where the user has to indicate the progress of each milestone and its date. The date must be entered using 3 combo boxes - one each for day, month and year, as shown here:
This means that to enter a date, one needs a minimum of 6 mouse clicks. If the date could be entered via an edit box, with a calendar icon to invoke a date picker when necessary, the user could enter the date much more quickly.
But this would require more input-validation effort from the developer. Since the page already uses client-side JavaScript, it is not such a big deal; the most difficult task is determining the client's locale.
When you have a dozen or so dates to enter, every saving is a real bonus and the developer should consider redesigning. The current design is really for the benefit of the developer - it saves code. In fact, they still need to validate the input. What if someone selects 31/04 or 30/02?
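The fix really is as simple as described: the same loop that checks every box in the column can clear them. Here is a hypothetical sketch (not the application's actual code); the row check boxes are modelled as plain objects so the logic can be shown outside a browser, but in the real page the same loop would run over the column's checkbox input elements.

```javascript
// Hypothetical sketch of the standard toggle-all behaviour.
// The master box drives both directions: checked selects all rows,
// unchecked de-selects them all -- the binary behaviour every user expects.
function toggleAll(selectAllBox, rowBoxes) {
  for (const box of rowBoxes) {
    box.checked = selectAllBox.checked;
  }
}
```

Clicking 'Select All' a second time simply flips the master box to unchecked before this runs, so one loop serves both operations - no second class of check box required.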
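Rejecting impossible combinations like 31/04 or 30/02 is cheap in client-side JavaScript. A minimal sketch (the function name is mine, not the package's): build a Date from the three values and check that nothing rolled over.

```javascript
// Hypothetical validator for the day/month/year combo boxes. JavaScript's
// Date constructor silently rolls invalid dates over (31 April becomes
// 1 May), so a round-trip comparison exposes impossible combinations.
function isValidDate(day, month, year) {
  const d = new Date(year, month - 1, day); // JS months are 0-based
  return d.getFullYear() === year &&
         d.getMonth() === month - 1 &&
         d.getDate() === day;
}

isValidDate(31, 4, 2011);  // false: April has only 30 days
isValidDate(30, 2, 2011);  // false: no 30th of February
isValidDate(29, 2, 2012);  // true: 2012 is a leap year
```

The same check works whatever the client's locale, since it never parses a date string.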
Saturday, April 30, 2011
Example of ill-treating your users - Link Market Services
This is a good example of how a company mistreats its users. Link Market Services is an Australian share registry company. It is ironic that they add the word 'Services' to their company name, as you will see what a disservice they do their users.
Users of this company generally have no choice. They become its users by being shareholders of companies that have outsourced their share registry activities to it. Because the users cannot go elsewhere, the company evidently believes it can afford to ill-treat them, as the following examples show.
This company has a chequered history of providing bad service. When it became a share registry company, it ran its maintenance updates at the very busy times when users were querying their portfolios. It did not think it needed to notify users upon log-in that they could not access their portfolio data; very often it simply told me my portfolio information was invalid when in fact it was not - they were just updating some data. It is amateurish, to say the least.
At one time it even tampered with my log-in password, and I had to complain to the organisation to have it rectified and be assured there was no security breach.
The latest disservice to their users came when they redesigned their system without forewarning users or advising them of the changes. They just did it. So users' normal log-ins were no longer valid. Even after users successfully re-registered, they had lost their previously organised portfolios and had to start again. I guess it is cheaper to simply ignore your users when they have no choice.
This is hardly going to earn you, Link Market Services, respect and loyalty from your users. As they say, a leopard does not change its spots - so damn true. This is a real-life example of how you do not design a new system: without considering migration issues and your users.
Labels:
Mismanagement
Friday, April 29, 2011
Design for only the computer illiterates - AVG Downloads
If you are a developer of the download manager and installer for AVG or similar products, please do not design your download page solely for the computer illiterate by forcing all users through the download manager. I know many people will have no idea which version to download - 32-bit or 64-bit.
But there are plenty of people out there who are very competent, probably more so than you, and can distinguish a 32-bit environment from a 64-bit one. Forcing everyone to use a download manager also wastes bandwidth. Many people have several machines, and it is totally illogical to have each machine download the same thing because your download manager is so dumb that it does not even offer the user a chance to save the file.
You also need to cater for those power users.
Adobe Flash used to be notorious for hiding the place where one could download the stand-alone installer, but lately it has had a change of heart and ditched the annoying download manager. This is much appreciated, and a lesson AVG should learn.
In fact, I find AVG so annoying that I have been looking for programs that offer their users a full installer for download, and one such program is Avast. Indeed, I have replaced AVG on several of my machines (real and virtual) with Avast and find it a refreshingly nimble, no-nonsense replacement for a lumbering monster.
It seems that with each new version AVG goes backwards by a few steps. Now it takes a draconian attitude in forcing users to use its download manager. What next? Time for a serious rethink about using AVG.
Labels:
AVG
Sunday, March 13, 2011
Definition of a "Bad Design" in software
I came across this succinct definition of what constitutes a "Bad Design" in software:
- It is hard to change because every change affects too many other parts of the system. (Rigidity)
- When you make a change, unexpected parts of the system break. (Fragility)
- It is hard to reuse in another application because it cannot be disentangled from the current application. (Immobility).
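A tiny, hypothetical JavaScript example of the contrast (all names are mine): hard-wiring a collaborator produces exactly the rigidity and immobility listed above, while injecting it keeps the class changeable and reusable.

```javascript
// Rigid/immobile: the generator constructs its own PdfWriter, so switching
// output formats means editing this class (rigidity), and it cannot be
// reused elsewhere without dragging PdfWriter along (immobility).
class PdfWriter {
  write(line) { return `[pdf] ${line}`; }
}
class RigidReportGenerator {
  constructor() { this.writer = new PdfWriter(); } // hard-wired dependency
  generate(lines) { return lines.map(l => this.writer.write(l)).join("\n"); }
}

// Flexible: the writer is injected; anything with a write() method works,
// so a change of output format never touches the generator (no fragility).
class ReportGenerator {
  constructor(writer) { this.writer = writer; }
  generate(lines) { return lines.map(l => this.writer.write(l)).join("\n"); }
}
```

Swapping in, say, a CSV writer is then `new ReportGenerator(new CsvWriter())`, with no edit to the generator itself - the "every change affects too many parts" symptom disappears.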
Labels:
Software Development,
Unit Testing
Thursday, February 17, 2011
Borders' demise - sad - what'll happen to those DRM eBooks?
When a bookshop of this size files for Chapter 11, it is always a sad affair - not only for the disappearance of a place to browse books but also for the staff who work there.
I can see those shops in Australia closing because they are just not competitive with buying from Amazon, even after paying postage and handling. This was true even well before the Australian dollar reached parity with the USD. Furthermore, the technical books in those shops are old and are there only as a token gesture. The best part of a bookshop is browsing and flicking through pages, which often stirs up impulse buying. If there isn't any new material, why visit?
I am more interested to know what will happen to those customers' DRM-controlled eBooks managed by Borders if it finally closes down. Perhaps this is a warning to the operators that use DRM - Amazon, Barnes & Noble, and others: you, the bookshop, won't be there forever, while my physical copy of "Gone With the Wind" exists independent of the shop or enterprise that sold it to me. Those eBooks were paid for in full by your customers, exactly like a physical item. They are not yours!
I do not totally subscribe to the theory that the eBook killed Borders:
"Electronic book publishing is going to destroy the major chains. The sort of high volume disposable fiction which is their stock in trade, will migrate almost entirely into electronic form over the next 10 years."
There are regular reports that eBooks cost more than their hard-cover versions. Moreover, having used a NOOK for several months to read non-DRM technical eBooks, I can tell you it is a real struggle to use those eBook readers when one needs to constantly flick between several books; they cannot manage several open books, remember their locations, etc. Reading a novel is fine, but for technical research or investigation, give me a stack of physical books any time - or run the eBooks on a PC/notebook.
Sunday, January 23, 2011
Windows 7 Ultimate insult to Tablet owners
Some time ago, I reported that Microsoft had decided not to provide tablet handwriting recognizers for other languages unless you pay them more money by buying either the Ultimate or the Enterprise Edition.
The other day, I had the pleasure of using a Tablet running English Windows 7 Home Premium, and sadly the pleasure quickly turned sour when I discovered that the TIP was locked to a keyboard when I selected Traditional Chinese. This confirms that Microsoft's earlier announcement is true.
On my old P1510 running the trusty XP Tablet Edition, I could use the TIP to write Traditional Chinese or any other foreign language. Now greedy Microsoft is giving its users the Ultimate insult by forcing them to buy an expensive edition just so they can write in a foreign language.
No wonder Microsoft is losing tablet market share to Apple's iPad, as Microsoft is blinded by chasing money rather than giving its supporters better support. Micro$oft has taken away something its supporters naturally expect to be in all editions of Windows 7, as it was in XP. Who would have guessed that a supposedly 'better' and 'newer' operating system has less support - unless you pay more? It is just pure greed.
If you are one of those victims of M$ greed and only want to write Chinese, bypass Micro$oft and buy this fantastic tool - Penpower Junior. Don't let Microsoft hold you hostage. This tool can be used on Tablet and non-Tablet machines alike.
Monday, January 3, 2011
gpg4win 2.1.0-rc1 making progress
It is nice to report that Gpg4win 2.1.0-rc1 has made small progress in fixing some of the issues unearthed previously with respect to running on Traditional Chinese XP. Now a user running Traditional Chinese Windows can use GPA's user interface to submit a passphrase to create a key.
However, the Windows Explorer integration is still failing as reported. Sadly, Kleopatra.exe still does not run when the "Language Settings for Non-Unicode Programs" is not set to English.
This Unix-to-Windows port still has a long way to go to achieve the environmental correctness of Firefox, Thunderbird, or TrueCrypt.
As a result, it is recommended that users not install GpgEX (the Windows Explorer integration), as it is very flaky and only works when your "Language Settings for Non-Unicode Programs" is set to English.
GPA is surprisingly usable if you can put up with a rather foreign UI, and it appears to be unaffected by the "Language Settings for Non-Unicode Programs". It also works on Traditional Chinese Windows but is not internationalized.
It stands out like a sore thumb when the other programs on Traditional Chinese XP all have localized menus. Surely Unix/Linux software is capable of handling internationalization.
Labels:
gpg,
Internationalization,
Security