Blockchain: Fad or Innovative Technology?

Recently, I wrote about how blockchains work. However, I did not really delve into what they are good for. Maybe the answer is (like war) absolutely nothing!

As is so often the case in the real world, the truth is somewhere in between. Blockchain solves a specific, real problem. However, it has also become a shiny new buzzword, used to promote some very dubious offerings. There is so much fraud and crime surrounding blockchain-based work that stories about hacks are commonplace – for example, cryptojacking, where a website installs malicious software into your web browser to “earn money” for someone else by solving blockchain problems.

The technology has expanded rapidly, and there are now numerous uses of blockchain, some benign and some malign in appearance. Some actually make a lot of sense to me.

For example, an old friend recently reached out to talk about his latest venture. It is an intriguing example of a class of blockchain uses that make sense to me: in this case, improving the timeshare business. Buying and selling timeshares is difficult for many reasons, several of which can be addressed with a blockchain-based solution. Three of the reasons listed in the online article 9 Reasons Why Timeshares Are a Bad Investment can be solved by a solution like my friend’s: it makes ownership more transparent, which helps ensure the person selling a timeshare truly owns it; it simplifies temporary rental; and it encourages a better resale process. Of course, this does not fix all of the issues, but it is an excellent example of a good use.

Another case is one I suggested to add value to a friend’s work on “livestock facial recognition.” Such a system could be combined with a blockchain representing ownership of a given animal, providing provenance (the chain of ownership), ease of transferability, and better tools for preventing theft. Again, this is not something we can do yet, but the technology is far enough along to make sense, and it solves a real problem.
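To make the provenance idea concrete, here is a toy Python sketch – entirely my own invention for illustration, not my friend’s system. Each ownership transfer records the hash of the previous entry, so any tampering with the chain of custody is detectable:

```python
import hashlib
import json

def add_transfer(chain, animal_id, new_owner):
    """Append an ownership transfer, linking it to the previous entry."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {"animal": animal_id, "owner": new_owner, "prev": prev_hash}
    # The entry's hash covers its contents plus the previous hash,
    # which is what chains the entries together.
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    chain.append(entry)

def verify(chain):
    """Recompute every hash; any tampering breaks the links."""
    prev = "0" * 64
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev"] != prev:
            return False
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if recomputed != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

ledger = []
add_transfer(ledger, "cow-42", "Alice")
add_transfer(ledger, "cow-42", "Bob")
```

A real blockchain adds distributed consensus on top of this, but the hash-linking shown here is what provides the tamper-evident chain of ownership.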

Other uses of blockchain technology are more challenging to evaluate. For example, the Ethereum blockchain technology model is widely used because it provides abilities beyond the basic blockchain idea. A crucial part of that is the idea that it can contain a contract. While businesses routinely use contracts today, such agreements are written in natural language, which can be ambiguous. An Ethereum “smart contract” is written in a language that has a precise definition of its behavior. That enables someone writing a contract to formally validate that the agreement does what is expected. This is surprisingly hard – after all, we have lots of programmers writing many programs, yet we routinely find that they struggle to “get them right.”
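To illustrate the idea of a contract whose behavior is precisely defined by code, here is a toy escrow written in Python rather than Solidity – a sketch of the concept only, not an actual Ethereum contract (all the names here are invented):

```python
# A toy "escrow" contract: the buyer deposits funds, and the seller can
# withdraw them only after the buyer confirms delivery. The point is that
# the agreement's behavior is exactly what the code says, no more and no
# less -- there is no ambiguity to argue over later.
class Escrow:
    def __init__(self, buyer, seller, amount):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.delivered = False

    def confirm_delivery(self, caller):
        if caller != self.buyer:
            raise PermissionError("only the buyer can confirm delivery")
        self.delivered = True

    def withdraw(self, caller):
        if caller != self.seller:
            raise PermissionError("only the seller can withdraw")
        if not self.delivered:
            raise RuntimeError("delivery not yet confirmed")
        paid, self.amount = self.amount, 0
        return paid
```

Because the behavior is fully specified, it is at least possible in principle to prove properties about it (for example, that funds can never be withdrawn twice) – which is exactly the promise, and the difficulty, of smart contracts.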

One specific example of a smart contract that I keep running into is the “non-fungible token.” The term “fungible” might be familiar to you, or perhaps it is slightly vague. Essentially, it captures the idea that something can be replaced with an “equivalent” object. In cooking, many things are fungible: you can substitute margarine for butter, for example, even if the results aren’t identical. Some things are not substitutable. A unique cultural artifact, such as the Mona Lisa, has no substitute. Thus, a “non-fungible token” is a “token” (an entry on the blockchain) that represents verifiable ownership of some specific thing. This is the opposite of cryptocurrency, which is fungible: few of us worry about the specific currency we hold – one £20 note is just as good as any other combination of currency adding up to the same amount. Of course, sometimes specific currency units become valuable for some reason, such as when they are misprinted in a way that makes them unique and interesting.

Personally, I have mixed feelings about NFTs. The schemes I suggested earlier with cows and timeshares make sense. There are blockchain-based land title registries, which I think are a great use of distributed ledger technology. The challenge with the generic term “NFT” is that you need to understand what the NFT represents to determine whether it has value. The NFTs I suggested represent ownership of a real thing, but many of the NFTs being marketed represent only a reference to a virtual object. If the object is itself part of the NFT, say a digital image, and ownership of the image is transferred, it might have value. Then again, a signed first edition of The Shining has value too, but it does not give you rights beyond owning that one copy. In other words, the right to create copies or derivatives need not be part of what was sold with the NFT. Thus, if you buy an NFT, you might find yourself asking whether you bought a usable template for making bags of poo, just a bag of poo itself, or a URL pointing to a picture someone made of a bag of poo that anyone else can use or access. The value of that is something you can judge on your own.

From my perspective, the interesting aspect of all this is trying to break things down and explain the process: what is a blockchain, what is a smart contract, what is a Turing-complete language like Solidity, how do these get used, and so forth. While potentially complicated, I have found most people can understand the basics. From that basic model, it’s then possible to explore specific issues, whether for cryptocurrency, smart contracts, NFTs, or any of the other uses that people keep finding for blockchain.

I expect that as the interest in NFTs continues to expand, I’ll have more opportunities to put my skills to good use, explaining how these technologies work and applying that to the legal cases that continue to arise around them.


Improving Patent Family Value

As an inventor, one of the things I did not appreciate was how to maximize the value of a patent family. I suspect one reason is that the attorney with whom I did much of my work focused on drafting the patent and nursing it through the prosecution process (note: “prosecution” here means “getting it through the patent process,” not “enforcing it”).

Since that time, I have worked with litigators and patent brokers. Litigators taught me that a patent owner can use one trick: “keeping the patent prosecution alive,” which means the owner continues to submit new claims against the original specification. From a litigation perspective, the patent owner can file new claims using the original specification and, if successful, obtain a patent that can then be enforced against potential infringers. Brokers taught me that a patent is worth much more if a potential buyer can still file new claims on the original specification, because that makes the patent family far more valuable in potential litigation.

Multiple patents granted against the same specification share a common priority date and a common expiration date; such patents are usually considered a “family.”

One good example is a well-known patent owned by Leland Stanford Junior University (most people just call it “Stanford”): US Patent 6,285,999. It is a seminal patent because it provides the original description (“teaching” in patent parlance) of ranking web pages based upon how many other web pages reference them. The algorithm is commonly used in my area of computer science (“systems”) and is referred to as PageRank. Indeed, PageRank is well known enough to have its own Wikipedia page.
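The core idea can be sketched in a few lines of Python. This is a toy power-iteration version of the general “rank by inbound links” concept, not the method as claimed in the patent; the three-page link graph and damping factor here are invented for illustration:

```python
# Toy PageRank: a page's rank depends on the ranks of the pages that
# link to it. Rank "flows" along links; a damping factor models a
# reader who occasionally jumps to a random page.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start uniform
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:                     # dangling page: spread evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
        rank = new_rank
    return rank

# A is linked to by both B and C, so it ends up with the highest rank.
ranks = pagerank({"A": ["B", "C"], "B": ["A"], "C": ["A"]})
```

The real algorithm operates on billions of pages with sparse-matrix techniques, but the fixed point being computed is the same.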

On January 10, 1997, the original specification was filed as provisional application US3520597P. That date is the “priority date” of the subsequent patent applications, because they are all based upon the same common specification.

If you review the history of this patent, the first actual application was filed on January 9, 1998, the last day the provisional application was valid (that period of validity was one year; as far as I know, it still is). The patent (6,285,999) was granted on September 4, 2001. The “Notice of Allowance” from the patent office was issued on April 23, 2001, and the patent issue fee was paid on July 11, 2001. The second application was filed on July 2, 2001.

Because the second application was filed before the patent was issued, it “continued” the application process against the original specification. This process was repeated ten additional times. Thus, 12 different applications were filed against the same specification. The most recent application was awarded a patent on May 13, 2014 (8,725,726).

If there is no active continuation application on file with the USPTO when a patent issues, that specification is complete. It then becomes part of the “prior art,” and no future patent claims can be filed against that original specification.

Bottom line? If you want to maximize the profit potential of your patents as an inventor, it is good to keep an application open, as that allows you (or a subsequent owner of the patent) to file additional applications focused on specific claims that can then be used to protect your invention.

I realize some people may not be familiar with PageRank. However, this algorithm is the basis of the technology that launched Google. Larry Page, the inventor, was a graduate student at Stanford at the time. Thus, this is likely one of the most valuable patents ever granted.



Much of my work relates to meta-data, that is, “data about data.” For example, the name, size, and creation date of a given file are forms of meta-data. One of the areas of computer technology I have been working in for decades is storage, particularly the part of storage that converts physical storage (local or remote) into logical storage.

Usually, we call the software that converts physical storage into logical storage a file system. One significant benefit of using file systems is that they provide a (mostly) uniform model for accessing “unstructured data” (files).

Traditionally, we organize files into directories. Directories, in turn, can be placed inside other directories. The result is presented to users as a hierarchical information tree, starting with a “root” and descending, with each directory containing more directories and other files.

I have already mentioned a few classes of information maintained by file systems: name, size, creation date. Many file systems also provide additional information (meta-data) about files, including:

  • Who can access this file?
  • When was the file last modified (note that this is distinct from when it was created)?
  • When was the file last accessed (often without being modified)?
  • Can the file be written (the “read-only” bit is quite common)?
  • Is the file encrypted?
  • Is the file compressed?
  • Is the file stored locally?
  • Are there special tags (“extended attributes”) applied to the file?

Not all file systems support all these different meta-data elements. For example, some file systems have timestamps that are only accurate to the nearest few seconds, and it is typical to update the “last access” time only once an hour (or even less often). This is because there is a cost associated with changing that information, which can have a measurable impact on the file system’s performance.
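On most systems you can inspect this meta-data directly. For example, Python’s standard `os.stat` call exposes several of the fields listed above (the exact fields available vary by operating system and file system; the file name here is invented):

```python
import os
import time

# Create a small file, then inspect its file system meta-data.
with open("example.txt", "w") as f:
    f.write("hello")

info = os.stat("example.txt")
print("size (bytes): ", info.st_size)
print("last modified:", time.ctime(info.st_mtime))
print("last accessed:", time.ctime(info.st_atime))
print("permissions:  ", oct(info.st_mode & 0o777))  # e.g. 0o644
```

Note that some of the meta-data listed above (encryption, compression, extended attributes) lives outside this portable interface and requires OS-specific calls.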

File systems are not the only place where we find meta-data. For example, when you take a photograph with your camera or phone, the device usually stores meta-data inside the image file in a standard format; for JPEG and other image formats, this is the Exchangeable Image File Format (EXIF). The information recorded has changed over time and may not always be present (it depends upon the device taking the photo, for example), but it can include timestamps, camera settings, possibly a thumbnail, copyright information, and geo-location data.
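If you have the Pillow imaging library installed (an assumption on my part – nothing in this post depends on any particular tool), you can read EXIF fields with a few lines of Python. Here I build a tiny JPEG with invented “Make” and “Model” values just so there is something to read back; normally you would open a photo straight from a camera or phone:

```python
from PIL import Image
from PIL.ExifTags import TAGS

# Build a tiny JPEG carrying a couple of EXIF fields.
exif = Image.Exif()
exif[271] = "ExampleMaker"   # EXIF tag 271 = "Make"
exif[272] = "ExampleModel"   # EXIF tag 272 = "Model"

img = Image.new("RGB", (8, 8), "white")
img.save("photo.jpg", exif=exif)

# Read the EXIF meta-data back out, mapping tag numbers to names.
loaded = Image.open("photo.jpg").getexif()
for tag_id, value in loaded.items():
    print(TAGS.get(tag_id, tag_id), "=", value)
```

Command-line tools such as exiftool expose the same information; the point is simply that the meta-data rides along inside the image file itself.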

Analyzing and understanding meta-data can be directly helpful when it comes to looking at image files. Ironically, when the meta-data for an image is consistent, you can’t tell if it has been tampered with. Yet, when the meta-data for an image is inconsistent, you can reasonably conclude that the image has been modified in some way.

For example, in a case that came up a couple of years back, I was asked to review another expert’s report. That expert stated they had a copy of the file as extracted from a hard disk drive and another copy from a compact flash device. The meta-data varied between the two files.

The version of the image on the hard disk showed:

  • File system modification was November 10, 2005, 20:25:04
  • EXIF creation was November 10, 2005, 20:25:04
  • EXIF CreatorTool was Photoshop Adobe Elements 3.0
  • EXIF Model was Canon EOS 20D

The version of the image on the compact flash (CF) device showed timestamps of:

  • File system modification was November 10, 2005, 20:25:04
  • File system creation was November 10, 2005, 20:25:04

The expert report did not indicate what the EXIF data of the original file showed. However, what was clear is that the image had been loaded into Adobe Elements 3.0 (which, interestingly enough, was distributed with the Canon EOS 20D). I did not have a Canon EOS 20D available to verify that the camera itself didn’t write “Photoshop Adobe Elements 3.0” into the EXIF meta-data (had it been my report, I would have suggested doing so), but I did not think that was likely – and the other expert stated it did not.

So, I was able to conclude that “the meta-data on the image is consistent with it being modified.” Why?

  • The name of the application was written into the image. Thus, at a minimum, the image’s meta-data was modified, even if the actual contents were not modified (remember, I didn’t have the original images; I was just looking at meta-data).
  • The timestamps were identical between the CF copy and the hard drive copy. When an application modifies a file, it usually writes a new copy and then renames the new copy over the old one – in which case the timestamps would normally not match the originals. Of course, the application might deliberately reset them. So, again, if I had been writing the expert report, I’d have tested to make sure Elements 3.0 worked as I expected. Since the original expert stated it did, I was able to concur with that expert’s analysis.
  • If an application overwrites the existing file, the creation timestamp and the modification timestamp will differ.

EXIF meta-data can be modified – I use Photoshop to look at and modify meta-data sometimes (e.g., to add copyright information or strip out geo-location data before I post a photo). Still, the file system itself wouldn’t modify it.

File system meta-data can be modified, too – an application can invoke operating system calls and change those timestamps. But on POSIX systems, doing so updates the file’s “change time” as a side effect, which leaves a trace.
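As a sketch of that point, here is how an application can back-date timestamps with Python’s standard `os.utime` call – and the trace it leaves behind on a POSIX system (the file name is invented):

```python
import os
import time

with open("evidence.txt", "w") as f:
    f.write("data")

# Back-date the access and modification timestamps by one day.
yesterday = time.time() - 86400
os.utime("evidence.txt", (yesterday, yesterday))

info = os.stat("evidence.txt")
# atime and mtime now show yesterday, but on POSIX systems the utime
# call itself updated the change time (st_ctime) to "now" -- an
# inconsistency an examiner can spot: the file appears to have been
# modified *before* its meta-data last changed.
```

This is exactly the kind of inconsistency meta-data analysis looks for: back-dating the visible timestamps is easy, but making every related timestamp tell the same story is much harder.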

I decided to check what information Photoshop shows me now. It uses the newer (and more general/extensible) XMP meta-data format:

XMP Meta-data from a PNG file that I created

And here are the file system timestamps for that file:

Native timestamp information from the system where the data is stored

Notice that the access timestamp has been updated (because I read the file with Photoshop), but the modify and change times have not. Since this was a Linux system, I had to dig a bit more to extract the creation timestamp (the Ext4 file system stores the creation timestamp, but most utilities use an older interface that does not make it available).

Extracting the creation timestamp on my Linux system

As you can see, the other timestamps also match, and the original creation time (“crtime,” as opposed to the “change time,” shown as “ctime”) matches the modified time to the second.

Thus, I know that the application created and wrote the file in quick succession. Notice that the creation time and modified time are in fact slightly different – the difference is in nanoseconds, too small to show up in a display that is only accurate to the nearest second – with the creation time slightly earlier than the modified time. The change time is a second later. This is precisely what I’d expect to see:

  • The application creates a new file with a temporary name. This sets the creation timestamp of the file.
  • The application writes data to the new file. This sets the modified timestamp of the file.
  • The application renames the temporary file to the final name. This is a change to the file’s meta-data, which updates the change time. Since the file contents did not change, the modified timestamp doesn’t change. The access timestamp is today, because I opened the file to look at its meta-data.
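The steps above can be sketched in Python using the same write-then-rename pattern (the file names are invented for illustration):

```python
import os

# The safe-save pattern many applications use: write the data to a
# temporary file, then rename it over the final name. The rename is
# atomic, so a crash mid-save never leaves a half-written file under
# the real name.
def safe_save(path, data):
    tmp = path + ".tmp"
    with open(tmp, "w") as f:   # sets the creation timestamp
        f.write(data)           # sets the modified timestamp
    os.replace(tmp, path)       # rename: a meta-data change, so it
                                # updates the change time only

safe_save("document.txt", "first draft")
safe_save("document.txt", "second draft")

with open("document.txt") as f:
    print(f.read())  # prints "second draft" -- the old copy was replaced whole
```

Note that each save produces a brand-new file, which is why, after a genuine save, the creation and modification timestamps reflect the save rather than the original file’s history.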

Meta-data tells a story; it isn’t necessarily inviolable, but modifying it in a way consistent with “how things work” is more complicated than one might imagine. As our computer systems have become more sophisticated, our mechanisms for verifying meta-data have similarly improved. For example, it used to be that the “state of the art” in signing a document was to sign it physically. If you were paranoid, you might initial each page, which made it more challenging to modify. Today, you can digitally sign a PDF document; that signature covers the document’s content and includes a timestamp along with a unique signature associated with the signing person. At present, faking such a digital signature is out of reach, and modifying the signed document without detection is impractical. That’s the power of combining meta-data with digital signatures.
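The core mechanism behind such signatures is that the signature covers a cryptographic hash of the content. Here is a sketch of just the hashing step, using Python’s standard library (real signatures add public-key cryptography on top of this digest; the document text is invented):

```python
import hashlib

document = b"I agree to the terms above."
digest = hashlib.sha256(document).hexdigest()

# Any change to the content -- even a single character -- produces a
# completely different digest, so a signature computed over the
# original digest no longer verifies against the modified document.
tampered = b"I agree to the terms above!"
assert hashlib.sha256(tampered).hexdigest() != digest
```

The signer encrypts the digest with their private key; anyone can recompute the digest and check it against the signature, which is why modifying a signed document without detection is impractical.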


The Journey Begins

Thanks for joining me!

Good company in a journey makes the way seem shorter. — Izaak Walton


This blog, unlike those I have done before, is focused on my consulting work in the litigation support domain. Since I work with technology, I thought I would start by looking at interesting patents, which I find through the patent dispute process, and discussing them in the context of how I would approach them as an expert.

I am the primary inventor on 11 US patents in the technology space and was personally involved in their prosecution. I have been involved in several patent disputes in the past, and while I have yet to testify at trial, I have been through the other stages of the process.

In addition to inventing those patents, I also owned them for a while, as they were assigned to me after leaving my last company. I ultimately went through the patent disposition process as well, working with a broker to sell them. Each aspect of my involvement in the patent process has taught me quite a bit about how it works.

In the coming weeks and months I’ll be sharing different aspects of the patent process from my own unique perspective. I expect to discuss:

  • An expert’s perspective on patent litigation. I get a daily report of new patent cases filed in the United States from the folks at Rational Patent (RPX). In fairness, I don’t normally have time to go through all the complaints filed on a given day, so I pick those that look interesting to me.
  • My perspective on patent prosecution. What distinguishes a good patent from a bad patent from my perspective as an expert as well as someone who has gone through more than a dozen patent prosecutions.
  • My experiences in monetizing my patents. For small inventors, this can be one of the most challenging aspects of the patent process. Indeed, it is only by going through it myself that I’ve learned how it actually works.

As an expert, one of my goals is to help demystify technology as much as possible. Arthur C. Clarke said: “Any sufficiently advanced technology is indistinguishable from magic.” My goal is to demystify the technology so it is no longer magic. Hence my tag line, an inversion in honor of Clarke’s memory: Any sufficiently advanced magic is indistinguishable from technology.