
Into the Fray Uber Leaps…

August 19, 2016

I read today that Uber is planning on introducing autonomous cars into the Pittsburgh area soon.  This is key for them, as a significant cost for Uber is the required human driver.  If they could eliminate the driver from the car, Uber’s service would be cheap, potentially cheaper than owning and driving your own car.

Sounds great, right?

Well, if you refer back to an earlier post about Google’s self-driving car you will note that I made a big deal out of the difference between a rules-based system and a strong-AI-based system.  My opinion on this hasn’t really changed and I would suggest people in Pittsburgh might want to take heed of this.

Today, with what we know about computing and computing systems, it is possible for an army of people (literally) to code up a system which is rules-based.  There are a number of examples of this having been done already, on small and large scales.  One particularly interesting example is the rules-based systems used for high-frequency trading on various stock exchanges.  Yes, it is fairly simple to have something that follows rules that say “When conditions X, Y and Z are met, do ABC.”  That is the essence of a high-frequency trading system, and these systems are very, very effective at making money.
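Just to make the idea concrete, here is a minimal sketch of such a rule in Python.  The condition names and thresholds are invented for illustration; no real trading system is this simple.

```python
# A minimal sketch of a rules-based trading trigger, purely illustrative.
# The condition names and thresholds are hypothetical, not any real system's.

def should_buy(quote):
    """Fire the action only when every condition in the rule is met."""
    return (
        quote["spread"] < 0.02          # condition X: spread is tight
        and quote["volume"] > 10_000    # condition Y: enough liquidity
        and quote["momentum"] > 0       # condition Z: price moving up
    )

quote = {"spread": 0.01, "volume": 25_000, "momentum": 0.3}
if should_buy(quote):
    print("do ABC: submit buy order")   # the programmed response
```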

The problem is, the goal of the high-frequency trading system is to make money.  This is done by having lots of rules and lots of things that result from these rules.  All the time people are adding conditions to the list and extending how these systems behave.  To the outsider, these systems might be considered to be “smart” or even “intelligent”.  But that is far, far from the case.

The big difference with a self-driving car is that the goals are different – and not just in magnitude.  You have multiple, often conflicting goals.  Sure, you want to move the car from A to B, but you also want to do so safely, following traffic laws and avoiding hazards.  The problem with a static rules-based system is that the environment is changing all the time, and unlike the high-frequency trading situation, you can’t just ignore conditions you don’t understand until things change.  It is necessary to always have a safe response to every situation.

Where Google’s car fell down on the job previously was that it didn’t have enough rules to account properly (and safely) for a bus.  Uber sounds like it is running into the same sort of limitation, although it is putting an engineer in the car to “help” it along to start with.  The problem is, with a rules-based system in a chaotic environment there are never enough rules.  Whenever an unforeseen situation is encountered and there are no rules, a rules-based system fails to be safe because it has no fallback.  It sounds like Uber’s system simply stops and demands a human intervene when this happens.
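To illustrate what “no fallback” means in practice, here is a toy sketch in Python, with hypothetical rule names.  The point is that the catch-all branch is not a driving decision; it is an admission that no rule exists.

```python
# A sketch of the fallback problem, with hypothetical rule names.
# A rules-based controller has only the responses someone wrote down;
# anything else falls through to a catch-all that just stops the car.

RULES = {
    "car_ahead_braking": "brake",
    "lane_ending": "merge_left",
    "red_light": "stop_at_line",
}

def respond(situation):
    # The catch-all is not a safe driving decision; it is an admission
    # that no rule exists for this situation.
    return RULES.get(situation, "halt_and_request_human")

print(respond("red_light"))          # stop_at_line
print(respond("mattress_in_lane"))   # halt_and_request_human
```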

An AI-based system would have a fallback, and likely it would be safe.  The problem is, we aren’t there yet.  Nobody has built an AI system that is anywhere near capable of dealing with driving a car, and we are likely decades (at least) away from being able to do any such thing.  The problems are legion, and we have been working pretty hard at them since at least 1970, if not earlier.  This isn’t a processor capacity or speed challenge, so faster computers are not the answer; sure, they help, but that isn’t the real underlying issue.

What both Google and Uber are almost certainly doing is trying to win the race to having a self-driving car without having to wait for the AI folks to come up with the right answer.  So they are taking a shortcut and using a rules-based approach so they can get their cars moving on our streets.  Sounds like a real business-oriented approach, except this is an issue that is going to have consequences far, far wider than the boardroom or the accounting department.  When a rules-based car runs over a child (and this will almost certainly happen), we will begin to understand the consequences of letting a businessman decide they need to win this race without an AI system behind it.

Email Privacy Issues

April 29, 2016

A while back I wrote a post that mentioned an old law and some considerations for email handling if you are at all concerned about privacy for you and your business.  You can review that post here.

I ran across an item today indicating that others are also interested in this issue, which perhaps will change the perspective on it in the future.  You can read the Ars Technica article here.  Basically, what it comes down to is that the government folks have noticed that if you use a cloud-based email service, your privacy rights today are far weaker than they are if you have your own email server.  A new bill that has passed the House of Representatives would remove the ability of law enforcement and civil discovery to access your emails without a search warrant or subpoena.

However, this doesn’t mean anything has changed yet.  The bill still needs to be passed by the Senate and signed by the President.  In today’s climate it is uncertain whether this will happen, or happen anytime soon.

For now, if you have any reason to be concerned that emails might be accessed as part of civil discovery or by law enforcement, you should not rely on cloud email services.  The difference between Gmail and your own email server is still that it takes a lot more effort and court approval to get email from your own server.  Under current law, email sitting on Gmail or other services for more than six months has zero protection and is open to “fishing expeditions”.

Critical Design Flaws Creep In

March 22, 2016

This is somewhat of an ill-informed rant.  I have no connection to the project discussed here other than having read some rather distressing things on a non-technical web site.  However, I think the general theme is a good one and could be instructive to a number of people out there.  If you disagree please leave a comment.

I read an article discussing Google’s self-driving vehicle program where a self-driving car failed to properly recognize a bus and was involved in a collision.  It was stated in the article, copied from somewhere else, that to correct this problem an additional 3500 tests were being added to properly recognize a bus and deal with it.

I am going to say that based on this little tidbit of information, which is admittedly probably not very accurate, this sounds like sheer insanity to me.  It also sounds like a scheme that could be cooked up by some folks with little programming experience, led by some more people with little programming experience and utterly no sense of “history” in the real world.

Why is This Insane?

Let’s just briefly discuss the idea of a self-driving car that has a lot of “tests” for different sorts of objects that might be encountered.  Also, just for the purposes of this discussion, let’s assume we are talking about a car on a limited-access highway otherwise known as an “Interstate” in the US.  These are roads where the only sorts of vehicles that are supposed to be there are cars, trucks and motorcycles.  No people walking, no dogs, no bicycles, literally just cars, trucks and motorcycles.

So how do you set about building a self-driving system?  Well,  one way is to start down the road of categorizing objects by subjecting them to various tests to identify them.  This would involve setting down the characteristics of cars, trucks and motorcycles that could be observed using cameras and possibly other sensors.  You could then fairly easily run through a bunch of identification tests to determine what something is and then control the behavior from that.  Sounds fairly straightforward, right?
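Here is a toy version of what that categorization might look like, with made-up features standing in for real camera and sensor data:

```python
# A toy version of the "identification tests" approach, assuming made-up
# features; a real system would use camera and sensor data.

def classify(obj):
    if obj["wheels"] == 2 and obj["length_m"] < 3:
        return "motorcycle"
    if obj["wheels"] >= 6 or obj["length_m"] > 7:
        return "truck"
    if obj["wheels"] == 4:
        return "car"
    return "unknown"   # everything the tests never anticipated lands here

print(classify({"wheels": 4, "length_m": 4.5}))   # car
print(classify({"wheels": 3, "length_m": 2.0}))   # unknown: a test gap
```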

Well, there is a serious problem with this, and the folks over at Google’s self-driving works would appear to be on the verge of discovering it.  They had a car that crashed into a bus because the bus (apparently) was not identified as the sort of vehicle the car would have to stay away from.  The article said the car assumed the bus would yield to it rather than the other way around.

This says two things: first, unlike an experienced human driver, the system assumes that other objects on the road will unconditionally yield to the car under some circumstances; second, and perhaps most importantly, failure to identify an object leads to incorrect assumptions.

I am going to say here that after many years of driving on Interstate roads, you can – and will – find way more objects than simply cars, trucks, and motorcycles present.  There will be furniture, dead animals, live animals, various household goods in various states of repair and well, just about anything you can imagine.  There will be people walking and riding bicycles.  There will be animals trying to cross the road.  There will be animals chasing the cars.  I could go on and on, but I won’t; consider the point made.

The problem here is that if a misidentified object required the addition of 3500 tests to get it right, what are they going to do with the first painted-over raccoon that is encountered?  What do you think the chances are that something like this would be “properly identified”?  Having a hard-and-fast rules-based system for categorizing objects initially sounds like a great way to do things – especially when you don’t have a strong AI system sitting around waiting for something to do – but it will absolutely be an utter failure.  Not only are there apparently problems with not treating every single object that is encountered as a potential threat, but the need to categorize an object in order to determine the behavior will lead to dramatic failures.

In a car, as many unfortunate teenagers would attest (if they could), dramatic failures cost lives.

How Can This be Fixed?

I am going to say that if they have started out with a rules-based approach for interacting with the real world, they will eventually figure out that this approach requires far too many tests, and every “new” object that is encountered will require more and more tests.  Eventually, regardless of how much computing power you throw at the problem, there simply will not be enough computing power to deal with the tests.

Can you fix this?  Not really.  A rules-based approach can work when you can limit the universe of things that need to be dealt with.  In the real world, out on a road, you can’t.  What happens when a large bug impacts the cover over the camera lens?  Well, people can be trained to do something about it but with a rules-based system all you can do is identify the situation and have a programmed response.  Eventually you are going to run out of room to set up these programmed responses because the “situations” that can be encountered are really unlimited.  And this is assuming a limited-access highway, not a busy suburban street where the children, dogs, bicycles, etc. are supposed to be there sharing the roadway.

So I’d say eventually they will rediscover the wheel – because we have been through all this before in different ways – and figure out that a rules-based approach isn’t going to work at all, ever.  Should such a thing ever be released on the roadways of the US, I would strongly recommend avoiding potential encounters, taking whatever steps are required – such as moving to rural Scotland – before an encounter happens in which people die.

What is the “Right” Way?

Assuming there is one, right?  Well, there is.  And it is something that we have pretty much known since the 1970s.  The idea that open-ended questions like “What is this object?” can be answered by rule-based computations has been pretty much rejected.  At least partly this is because it has been recognized that humans, even lots of them, aren’t going to be able to come up with all the rules in advance.  So whatever you build cannot really be a static system.  In general, a more learning-oriented approach has been thought to be necessary, to eliminate the requirement that a static system “know” everything about its environment in advance.
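To contrast with the rules-based sketches above, here is a minimal learning-oriented classifier: a nearest-neighbor lookup over labeled examples.  The features are invented for illustration; the point is that handling a new kind of object means adding an example, not writing new rules.

```python
import math

# A minimal sketch of a learning-oriented classifier: a nearest-neighbor
# lookup over labeled examples. The features are invented for illustration.
examples = [
    ((4, 4.5), "car"),
    ((6, 12.0), "bus"),
    ((2, 2.2), "motorcycle"),
]

def classify(features):
    return min(examples, key=lambda e: math.dist(e[0], features))[1]

# Encountering something new does not require writing new rules,
# only adding a labeled example.
examples.append(((18, 20.0), "semi_trailer"))
print(classify((17, 19.0)))   # semi_trailer
```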

It would seem that Google has the foundation for both machine learning and utilizing the Internet to facilitate “crowdsourcing” of machine learning.  So why did they apparently design a static rule-based system?  Who knows?  My opinion is that this is the result of hiring people with little experience and little knowledge of the history of machine learning, artificial intelligence and the problems with static systems in dynamic environments.

Something on this subject that dates even earlier than the 1970s, and that I would assume is required knowledge for anyone even skating around the edges of this field of computer science and programming, is the early writings of Isaac Asimov.  His positronic robots were built with three “laws” and everything else was apparently learned.  There are a number of references to this in his works over the years.  A self-driving car is something we should be far more concerned about being built by someone with no knowledge of the “Three Laws” than a simple industrial robot.

Wider Considerations

I think it can be assumed that Google is exercising “common business sense” with their self-driving car program.  They want to produce tangible results as soon as possible with a minimum investment.  We have been working on “strong AI” of the sort that will eventually be required for a self-driving car since at least the 1970s, and what, exactly, do we have to show for it?  Well, we still aren’t talking to our computers, and natural-language speech recognition is one of the first applications of “strong AI”.

So, I guess Google can be forgiven for trying to create a self-driving car and side-stepping building strong AI as part of the project.

However, a lesson for all software projects is that you need to understand the environment the result of your creation is expected to function in.  Failure to do so results in an unusable system which in some cases (not many) could actually be dangerous.  If you don’t understand the environment, you are going to make some mistakes that might be seen later as being really obvious.

Also, never, ever underestimate the importance of history.  No, I am not talking about knowing the complete list of English monarchs from Edward the Confessor forward.  I am referring to the idea that since the beginning of computing in the 1940s there have been a lot of people involved with everything from file systems, error recovery and even artificial intelligence.  Assuming these people had nothing to contribute and all of their work is “obsolete” is a huge mistake and one that will come back to bite you.  I have seen this happen over and over in software development and it can really hurt.

Phone Systems: POTS vs. VOIP

February 8, 2016

I keep seeing spammy emails that apparently are trying to convince people that the time is NOW for them to convert to a VOIP phone system.  Of course, they want to sell you a particular system as well.  I thought I would write this to perhaps add some information for people thinking about this.

Most of the information here will be of use to people that have not already taken the plunge to some type of VOIP phone system.

VOIP (Voice over IP) is a phone interface method that uses the Internet instead of a telephone landline.  SIP (Session Initiation Protocol) is the protocol used by VOIP systems to set up and manage calls.  PBX stands for Private Branch Exchange, which means you have your own phone switch for your extensions instead of a phone line for each one.

If you have a PBX switch today, or are just relying on the phone company (Centrex, for example), faxing is pretty simple.  You probably have a dedicated phone line for sending and receiving faxes and it just works.  You put paper in and paper comes out at the other end.  Understand that it does not work this way with VOIP phone systems.  We did not understand this at all when we did our conversion from a Panasonic PBX system and it made things pretty difficult.

If you have any fax traffic at all, the simplest solution is to simply retain a POTS phone line for the fax machine.  It will continue to “just work” that way.  If someone says you can plug your fax machine into a socket on your new VOIP/SIP PBX switch forget it!  It will not work that way at all.  Even if someone demonstrates to you that it works, it will be with some very special equipment at both ends.  Random people sending you faxes will not work and you will likely not be able to send anyone a fax this way either.  It has been tried, over and over.

It is important to understand the differences between a VOIP phone system and a SIP PBX switch.  It is possible to utilize older phones with VOIP adapter boxes (think Vonage) in an office.  You could take your old phone switch and simply connect it to one or more VOIP adapters and Voila! you are using VOIP.  This isn’t difficult and it isn’t really expensive to do, but the features are really limited.  And you still need phone wiring to do it.

We did our conversion when we moved.  It meant that instead of needing phone wiring to each location as well as computer network wiring the only thing that was needed was the network wiring.  This was a considerable savings and should be for anyone, since a huge part of setting up an office is getting the phone wiring in place.  A VOIP/SIP PBX eliminates all of that.

There is one important consideration, however.  Everyone will tell you that you only need one network wire for each computer/phone location.  This is somewhat true, but it is somewhat obsolete information.  Today, most computers can utilize a Gigabit Ethernet connection for higher-speed networking.  However, the phones are still limited to 10/100 Ethernet.  If you plug your computer into the phone and the phone into the wall – the suggested configuration – you will only need one wire, but this will disable Gigabit Ethernet for the computer.  There are two ways of dealing with this: two network wires, one for the computer and one for the phone, or a small Gigabit Ethernet switch at each location.  If most users do not need Gigabit Ethernet, and the “power users” may need more than one network connection anyway (laptop, multiple computers, etc.), the local switch is probably the way to go.

You may have heard of SIP hosting.  This is similar in concept to Centrex: instead of having to buy an expensive phone switch, you simply lease it and someone else manages it.  There are a few things to consider with this.  First, the management isn’t all that complicated or intense and is probably a lot simpler than with an older PBX system.  Second, you will have all of the problems of someone else managing “your” phone system for you.  It won’t be all that cheap, and things will not get done the way you might prefer them to.  Finally, a huge reason to not even consider this is that VOIP/SIP PBX systems are really cheap.

In short, I don’t recommend SIP hosting.  There are companies that specialize in it, but it won’t be as helpful to getting started as you might think.

Depending on how you want to go, it is possible to have a full-featured phone system with the switch costing less than $1,000 and around $100 a phone.  You might be able to find still cheaper phones and build a DIY phone switch using something like a Mac Mini running Asterisk – an open-source SIP PBX software distribution.  You can also spend $2,000-$3,000 on a phone system with phones designed for it.  If you want someone to come and set things up for you, this is probably the only way it will happen.  Phones will be more expensive, probably $150-$250, depending on things like the presence of a display and how big the display is.

What are we doing at InfinaDyne?  We bought a cheap ($650) dedicated phone switch rather than using Asterisk on a commodity PC.  The idea, at the time, was that we could use the built-in analog phone interface for a cordless phone and a fax machine.  The fax machine, as I mentioned above, didn’t work out.  The cordless phone is OK and helpful sometimes but I don’t think I would go the dedicated system route again.

We could have bought phones designed for the phone switch at 2-3 times the price of the commodity Cisco phones we did get.  Turns out that we didn’t do all that bad with the cheap phones – once I figured out how to set them up properly.  One feature that we did not have with our Panasonic PBX was conference calling (tying multiple extensions onto a single external call) and we do not have that with our new system either.  I believe we could have this with Asterisk and I would likely take the plunge with something like a Mac mini if we replace the PBX switch.

An important consideration that I was only lightly exposed to is that SIP PBX companies come and go.  Mostly, a lot of the software is based on Asterisk, where all the research was done on how to talk to SIP hosts and phones.  So it doesn’t take a huge investment in coding to get a SIP PBX company up and running.  However, with any small business, a lot of them die an early death.  Or, for their own business reasons, they want to sell you expensive support plans to keep getting software updates.  So check out these companies and try not to spend too much on a SIP PBX.

The last link in the chain is the SIP trunk.  This is how the SIP PBX connects to the outside world through the Internet.  You will find there are quite a few companies out there that offer this but it will come down to around $30 per “line”.  It works just like your old PBX – you have 20 phones in your business but only need three outside lines for simultaneous calls in progress.  Same with SIP trunking.  You will need to have the same number of “trunks” or appearances as you need to have simultaneous calls, and there may be a minimum of three to start with.
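The sizing arithmetic is simple enough to write down, assuming the rough $30-per-trunk figure above:

```python
# Back-of-the-envelope trunk sizing, using the approximate figures above.
phones = 20                 # extensions in the office
simultaneous_calls = 3      # peak concurrent outside calls
per_trunk_monthly = 30      # dollars per trunk, approximate
minimum_trunks = 3          # some providers impose a floor

trunks = max(simultaneous_calls, minimum_trunks)
print(f"{phones} phones need {trunks} trunks: ${trunks * per_trunk_monthly}/month")
```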

The big benefit that you will see immediately with such a system is that while the per-line costs are about the same as for a regular landline phone there are zero long-distance charges.  No minutes, either.


Two features that have been extremely valuable for us with our SIP PBX: call routing and outside calling.

Call routing allows someone calling into the office to be transferred to an “external extension,” which can be anything.  Home.  Cell phone.  Whatever.  You can set up the switch to do this automatically, so if I do not answer the phone on my desk, the call is automatically routed to my cell phone.  I can even restrict this to happening only during the day.
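The logic itself is simple; here is a sketch of it in Python, with an assumed 8-to-6 daytime window.  A real PBX expresses this in its own dialplan or web configuration, not in code you write yourself.

```python
from datetime import datetime

# A sketch of the follow-me logic described above; the time window is an
# example, and a real PBX configures this rather than coding it.

def route_unanswered(now=None):
    now = now or datetime.now()
    daytime = 8 <= now.hour < 18
    return "forward_to_cell" if daytime else "send_to_voicemail"

print(route_unanswered())
```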

Outside calling means that I can have a SIP phone application on my smartphone (Android or iPhone) and make calls as if I am in the office.  Transparently.  Caller ID at the recipient’s phone says I am in the office.  No phone charges on my phone, assuming I am using a Wi-Fi link to connect to the office.  This answers the question of calling people back from out in the field when you don’t really want them to have your cell phone number.

Most, if not all, SIP PBX systems will support these features.  SIP Hosting probably will not.


The last comment I have on this is E911 service.  Most people that investigate VOIP in some form will hear about this and might worry.  It does sound somewhat complicated.  Your SIP trunk provider will take care of this for you and as long as you understand the limitations and how it works, it will not be a problem.  After all, how often do you really call 911?  I have had an office of my own since 2001 and I have actually called 911 once.  Once in 15 years.  That should put the problem in perspective.

Oh.  Yeah.  Put that SIP PBX on its own UPS!  Do this day one.

Taking out the Trash

December 4, 2015

Trash, the digital kind, is always a problem these days.  Whether it is at home or the office, we are collecting huge amounts of digital trash all the time and dealing with it seems like something we should pay attention to.  After all, it is taking up space and that could be expensive, right?

Well, not really.

Back at what seems like the beginning of time, when I was first using computers at home, disk space on hard drives and floppies was expensive.  So it was worth plenty of time and effort to delete every single item that could be deleted to make room for other, important stuff.

Another example comes from the early 1980s when I was at Bell Laboratories in Naperville, IL.  We had a box there that was called “the MSS”.  It was, in reality, an IBM 3850 Mass Storage Subsystem which staged data on tape and put it on rotating disks when needed.  The Naperville Computation Center eventually had two of these devices which totaled around 2TB of storage space.  This was needed to contain the massive growth they were experiencing at the time in materials related to the Number 5 ESS, the latest telephone switching system that was being sold at the time.  I didn’t see a price tag for an MSS, but you can assume the system almost certainly cost millions.  Clearly a justification for “taking out the trash” if I ever saw one.

Things have changed and they have changed in substantial ways.  First off there are now requirements for companies to keep all sorts of digital records.  These have to be kept for years and failure to do so can result in substantial fines and sanctions from the government.  A lot of this information is held in email and not every company has fancy email archival systems to manage this.  There is also just common sense that says you might need the email from a lawyer some time in the future, so maybe you should keep it.  This can be applied to all sorts of things that you might not think of immediately as vitally important right now but could be important later.  So just rippling through Outlook deleting stuff might not be the best idea after all.

Same thing with backups.  You can never have too much backup.  If you haven’t established some sort of backup for your computer(s), you might want to think about what you could lose if “something bad” happened to it.  Not only hard drive failures but also accidental deletion of files and theft of the computer should be things you are considering.  There are online services that are available for backing things up or you can have backup space on your network that will make this pretty simple and transparent.  Whatever you choose to go with, back up your files!

So what about all those digital photos that are lying around collecting digital dust?  Well, it used to be that the preferred storage medium for photos was the “photo album”: a book containing pages and pages of photos.  You might remember this sort of a book being brought out to show your prospective spouse sometime, usually something you would like to forget.  Today it seems most photos end up on people’s phones and the only problem with this is that phones are not usually backed up the way we would like them to be.  Often what backup there is isn’t trivial to access and is outside of our control.  This can, as some people found out, lead to other people being able to access these pictures.

But the biggest thing about these pictures is that you often do not delete anything.  So you have the fuzzy picture that didn’t focus properly and the picture with your finger covering up something.  These are right next to the pictures that did work out and when there are 400 pictures on your phone it may be somewhat intimidating to think about going through them to sort out the good ones and delete the bad ones.

All of this leads to a huge growth in personal data just as Bell Labs was seeing a growth in data back in the 1980s.

The question is what should be done about it.  The answer I would like to suggest is nothing.  It simply isn’t worth your time to do anything about it.

Today, at my office we have a small box with four drives in it.  Each drive can hold nearly 6TB (that is a 6 followed by 12 zeros).  It is configured to use the drives as two mirrored pairs, for a total capacity of about 11.5TB with 100% redundancy.  If a drive fails, nothing is lost because it is mirrored on the other set; simply replace the drive and keep going.  This box cost about $1,400 and holds as much as 10 or 11 of the IBM 3850 MSS devices for 1,000 times less money.  At home I have a similar box with two 3TB drives in it (which could easily be upgraded to two 6TB drives).  When it was purchased it cost less than $500.  It has complete redundancy like the office box does, and it can even be backed up to online services or other similar boxes if the need arises.
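Here is the arithmetic behind the office box, using the figures above (and assuming roughly 1TB per MSS device, based on the two-device, 2TB installation mentioned earlier):

```python
# The arithmetic behind the office box, using the figures above.
drives = 4
drive_tb = 5.7            # "nearly 6TB" per drive, formatted capacity
usable_tb = drives * drive_tb / 2   # mirrored pairs: 100% redundancy
box_cost = 1400

mss_tb = 1.0              # assumed: one IBM 3850 MSS held roughly 1TB
print(f"usable: {usable_tb:.1f}TB, about {usable_tb / mss_tb:.0f} MSS devices")
print(f"cost per usable TB: ${box_cost / usable_tb:,.0f}")
```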

The office box is a great place to put stuff that might come in handy.  It is almost certain that it will never be full, no matter what happens.

The home box is used for all sorts of stuff as well as for backing up computers and such.  It could be easily configured for sharing space with mobile devices to enable backing them up as well.  It has a huge photo library on it and all of our phones have the software for displaying pictures and adding new pictures to this repository.  If it ever gets full, I will just get some bigger drives for it and move everything over.

It is my contention that at this point with the price of storage being what it is there is no point in wasting my time looking for things to delete.  It makes more sense to keep them all.  Lazy?  Maybe, but it is important to sort out what it is worth spending time doing.  Today, I don’t think it makes much sense to spend a lot of time trying to decide between what is important to keep and what is trash.  So taking out the trash just isn’t worth the time and effort of doing it.  Keep the trash!

I am not recommending specific NAS devices here.  There are lots and lots of them out there with different services offered and different capabilities.  It is worth looking at different sorts to make sure you understand what is available and what might be useful.  If you want to know what I am using, leave a comment.

What is a .ART File and Why Do I Care?

November 11, 2015

In the 1990s AOL was a major force on the Internet.  There were people using AOL’s modems to get online and nothing else and there were people making extensive use of AOL’s content.  And everything in between these extremes.  AOL has since faded as a major force and some would say they have been replaced by Netflix, at least in terms of bandwidth consumption.

AOL was pretty interested in bandwidth, and considering a significant number of their users were on AOL’s modems, they wanted to provide a speedy experience for those users.  Towards this end, AOL acquired a company called Johnson-Grace and integrated a graphics format developed by Johnson-Grace into the AOL experience.  What AOL was doing was translating other graphic image types into a .ART file and transferring these files to the user.  The result was better bandwidth utilization, especially at modem speeds.  Microsoft evidently agreed with this strategy and bundled the Johnson-Grace .ART decoder as a DLL with Windows up until Windows Vista.

It should be noted here that there have been other uses of the .ART extension in addition to Johnson-Grace compressed images.  Evidently this extension applies to files used by some embroidery machines.  It is also used for other things now.  So if you have a file with an extension of .ART, it doesn’t necessarily mean that it is a Johnson-Grace compressed image.  You can tell by looking at the file content: all Johnson-Grace files start with the letters “JG” followed by a single byte value from 1 to 4.
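That check is easy to automate.  Here is a minimal Python sketch based on the signature just described:

```python
# A minimal check for the Johnson-Grace signature described above:
# the bytes "JG" followed by a single version byte from 1 to 4.

def is_johnson_grace(path):
    with open(path, "rb") as f:
        header = f.read(3)
    return len(header) == 3 and header[:2] == b"JG" and 1 <= header[2] <= 4

# Usage: is_johnson_grace("mystery.art") -> True for a JG-compressed image,
# False for embroidery files or other uses of the .ART extension.
```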

Microsoft evidently decided there were corrupted .ART files floating around on the Internet and these would cause a failure in the Johnson-Grace decoder which would then in turn cause Internet Explorer to fail.  To remove this and any potential security risk that an intentionally crafted corrupt .ART file might pose, Microsoft removed the Johnson-Grace decoder DLL from Windows Vista and all further versions of Windows.  Wikipedia also indicates that Internet Explorer had support for the .ART image format removed in 2006.

What this means is that if you have a computer that you upgrade to Windows 7 from Windows XP it may have the decoder DLL present, but if you acquire a new computer with an operating system later than Windows XP it will not have this decoder DLL at all.

What does this mean?  Well, because the Johnson-Grace algorithm(s) were never really disclosed there are no commonly available tools for working with the .ART format other than the decoder that Microsoft supplied.  The Johnson-Grace company was distributing a toolkit for free many years ago, but that ended.  If in the course of an examination you encounter a .ART file, you may not be able to view it without taking some steps to prepare for this.

There are two basic ways of dealing with .ART files, and this is going to depend on exactly what applications you are using and what support they have.  Applications from InfinaDyne know about the Johnson-Grace format and will utilize the decoder DLL.  Other applications may also have this support and work with the .ART format, if you have the decoder DLL.

The other way is to obtain and use an application which directly supports the .ART format.  At this time there are very, very few such applications.  One which is obtainable is the original America Online program.  It turns out that up until version 8, AOL used its own browser, and it supports viewing .ART files.  You can obtain version 8 of the America Online program from here.  Note that this program works only on Windows XP and earlier versions of Windows – you cannot run it on Windows 7 or later.

If you can make use of the decoder DLL, the first thing is to get a copy of JGDW400.DLL.  You can download this from a variety of sources – check this list.   The simplest thing to do with this file, once you obtain it, is to put it into the Windows directory on your computer where it will be available to any application that requires it.  If you have only a single application or two that can make use of this DLL it may make more sense to place this DLL into the folder with these applications rather than putting it in the Windows folder.  While it is highly unlikely that this DLL can introduce any meaningful security risk on your computer, forensic workstation or not, if you can isolate it to a few applications there is no reason not to do so.

I do not know what the prevalence of .ART files in forensic examinations might be.  If many people now in 2015 have never heard of a .ART file, there may not be much call for dealing with these things.  However, if you run into one of these files, InfinaDyne still has support for this format if you have the DLL available.  If you have any comments or concerns about .ART files, please respond with comments here.

Fake Flash Memory

November 10, 2015

Recently, I was advising someone that the simplest way for me to transfer around 200GB of data to them was on a large SD card or microSD card.  Boy, was I in for a surprise.

First off, if you are looking at an advertisement for a 256GB microSD card you will be disappointed.  There simply are none.  Well, none that actually have 256GB of storage capacity.  If you look at Amazon (as I did) you will discover there are both pretty expensive cards and some that are incredibly cheap.  Surprise!  The incredibly cheap ones have only a little memory in actuality and have a hacked controller that pretends there is much more storage available than there actually is.

How this works is simple.  You have a card with 2GB of actual storage on it, but the controller “pretends” to allow accessing 256GB by simply writing to the same 2GB that is there over and over again.  Now, the way computers and file systems work, you are writing information to the directory constantly as you write files, so the directory is almost always written out last.  The file data, what there is of it, is going to suffer because of this.  But the way this works allows you to think you have written far more than 2GB of data to the device, and when you check the directory it appears to all be there just fine.
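A toy model of the hacked controller makes the wrap-around obvious:

```python
# A toy model of the hacked controller: every write beyond the real
# capacity silently lands on top of earlier data.
ACTUAL = 2          # real gigabytes on the card
CLAIMED = 256       # what the controller reports

storage = {}
def write_gb(logical_addr, data):
    storage[logical_addr % ACTUAL] = data   # wraps around the real 2GB

for addr in range(CLAIMED):
    write_gb(addr, f"file data {addr}")

# Only the last writes survive; everything earlier was overwritten.
print(storage)   # {0: 'file data 254', 1: 'file data 255'}
```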

When you go to read it you will discover that some of the files have been corrupted.  In fact, everything except for the last 2GB or so will have been corrupted and the directory will likely have corrupted some of that as well.  You can get a sense of this by looking at the listings on Amazon with the customer reviews.  Some customers seem to be happy and these have to be reviews put there by the folks selling these things.  Everyone else is pretty unhappy with their purchase.

Right now, there is a reasonable test program for determining if your USB flash drive has a faked capacity and you can download it from this site.  This requires an empty drive for testing, so if you have anything you believe might be good on it, copy it first.  It is recommended by the author that you quick-format the drive before starting so as to remove any hidden folders or saved “trash” if you have used the drive on a Macintosh computer.
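For the curious, here is a sketch of what such a test program does: fill the drive with numbered blocks, then read them back and verify.  The mount point and block size here are assumptions, and a real tool adds error handling and progress reporting.

```python
import os

# A sketch of a capacity test: write numbered blocks, then verify them.
MOUNT = "/mnt/usb"          # hypothetical mount point of the empty drive
BLOCK = 1024 * 1024         # 1MB per test file

def test_capacity(n_blocks):
    for i in range(n_blocks):                       # write phase
        with open(os.path.join(MOUNT, f"blk{i}"), "wb") as f:
            f.write(i.to_bytes(8, "little") * (BLOCK // 8))
    bad = 0
    for i in range(n_blocks):                       # verify phase
        with open(os.path.join(MOUNT, f"blk{i}"), "rb") as f:
            if f.read(8) != i.to_bytes(8, "little"):
                bad += 1                            # overwritten: fake capacity
    return bad

# A genuine drive returns 0; a faked one fails on most early blocks.
```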

So what do you do if you bought a cheap USB drive or memory card and discover the capacity is faked?  Well, I am pretty sure most people aren’t on the phone to their retailer complaining about it.  In most cases, the retailer will disclaim any responsibility, saying they bring a product in and sell it; if the product doesn’t work you can get a refund, minus shipping cost.  Well, if you have a card that cost less than $10, the shipping cost is going to eat up a good part of that refund, so you figure why bother?

The result of this is (a) nobody returns them or so few do as to be negligible, and (b) the retailer is uninterested in the quality of the product even if you do return it.  So what do you think should be done about these flash drives and cards?

I think it might be reasonable to develop a testing tool that does not require the drive to be empty in order to be used and can report on some meaningful statistics about the drive as well.  When this is done, if it happens, it will be listed on the consumer software page on the InfinaDyne web site.

If you have a fake flash drive or card I would be very interested in obtaining it for testing purposes.  Click for our address.  If you send me a card, include your email address and I will see that you get a free product from InfinaDyne for your trouble.  I was thinking about buying one intentionally, but this seemed to be rewarding the scammers that make these things.