
The Importance of...


May 28, 2005

Issues of Future Copyright

Posted by Ernest Miller

Findlaw's Julie Hilden looks through a mirror, darkly, at copyright law issues of the future (Will the Future Bring Even More Important Copyright Issues Than The Ones Raised by Online File-Swapping?). Hilden is writing in response to the 8-minute web presentation, Evolving Personalized Information Construct. If you haven't seen this bit of futurism, please do.

In any case, Hilden is responding to this (Summary of the World: Googlezon and the Newsmasters EPIC):

Googlezon finally checkmates Microsoft with features the software giant cannot match. Using a new algorithm, Googlezon’s computers construct news stories dynamically, stripping sentences and facts from all content sources and recombining them. The computer writes a news story for every user. [emphasis in original]
The question is whether such a service would violate copyright. The fact-stripping is clearly legal under Feist, but I'm not really sure why this technology would have to strip sentences. If the system is smart enough to recognize facts, likely it'll be smart enough to produce rudimentary text in which to embed those facts. After all, isn't that the promise of the semantic web? Frankly, I don't think this will be a big concern.
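The fact-stripping-plus-rudimentary-text idea can be made concrete with a toy sketch. This is purely illustrative: the source sentences, the regex, and the template are all invented here, and a real system would need far more sophisticated extraction. The point is only that once a fact is isolated, it can be re-embedded in freshly generated boilerplate rather than in a copied sentence.

```python
import re

# Hypothetical sentences from two different news sources.
sources = [
    "Googlezon reported revenue of $4.2 billion for the quarter.",
    "Analysts said the results beat expectations on Monday.",
]

# Strip a simple "fact" (a dollar figure) instead of a whole sentence.
revenue = None
for sentence in sources:
    match = re.search(r"\$[\d.]+ \w+", sentence)
    if match:
        revenue = match.group(0)
        break

# Re-embed the extracted fact in generated text, so no source
# sentence is reproduced verbatim.
story = f"The company posted {revenue} in quarterly revenue, sources indicate."
print(story)
```

Under Feist, the extracted figure is an unprotectable fact; the generated sentence shares no expression with either source.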

Heck, you could probably do something pretty sophisticated with weather data today. I propose a variation on the Turing Test for weatherpeople. How long before virtual weatherpeople can produce what seems to be a live weathercast, based solely on the data fed to the system from the National Weather Service?
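A first step toward that virtual weathercast is trivial today, at least for the script itself. Here is a minimal sketch; the field names are invented and merely shaped like what a National Weather Service feed might supply, not its actual format.

```python
# Hypothetical observation data, roughly the shape of a weather feed.
observation = {
    "city": "New Haven",
    "temp_f": 72,
    "sky": "partly cloudy",
    "wind_mph": 8,
    "forecast_high": 78,
}

def weathercast(obs):
    """Turn structured weather data into a spoken-style report."""
    return (
        f"Good evening from {obs['city']}! Right now it's "
        f"{obs['temp_f']} degrees under {obs['sky']} skies, with winds "
        f"at {obs['wind_mph']} miles per hour. Look for a high of "
        f"{obs['forecast_high']} tomorrow."
    )

print(weathercast(observation))
```

The hard part of the Turing Test for weatherpeople isn't the script, of course; it's the on-screen delivery.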

But Hilden is right that copyright law is still on a collision course with the internet. I'll say it again. Google Print is now giving us complete access to every book in the public domain, fully searchable, fully linkable: what we always imagined the Heavenly Library would be like. Unfortunately, "everything in the public domain" means everything published before 1923, because there is no easy and efficient way to figure out whether something published in 1923 or later is out of copyright. The transaction costs are too high and will remain incredibly high, especially for works in the long tail (which doesn't mean they aren't valuable; in the aggregate they certainly are). Fixed copyright terms, or some sort of formalization, will be necessary to resolve this god-awful mess, so that we can continue to feed the work of humanity into Google Print and all its archival kin.

Hilden focuses on copying, the right of reproduction:

The issues are as simple and fundamental as they are troubling: Exactly how much content may be copied on the Internet -- and of what kind -- before copyright is infringed? And more deeply, when is content "copied" in the first place when it comes to the Internet? Does the fact that the copying is done via a machine editor -- not a human editor -- make a difference?
Copies, copies, copies. That is sooo 20th century. Computers make copies; that is what they do. I imagine, though I don't know the technical details, that Google's ginormous database of books stores numerous complete copies of the works, and not just as backups, either. So what?

We can waste all our time trying to figure out how many angels dance on the head of a pin as we develop arcane rules on when copies are made and whether those particular copies violate copyright, or we can think about information as a flow, as a transfer, as a distribution. The question shouldn't be whether particular "copies" are illicit, but whether particular distributions of information are illicit. Information exists in a transfer or potential transfer, not as a static thing. "Copies" are static things. "Distribution" is about the transfer or potential transfer of information.

We can imagine copyright as voltage and current. When thinking about electricity, we don't think about static electricity, we think about circuits, about regulating the flow of current, arranging for particular potential differences. We don't think of current as a thing to be copied.

Does information want to be free? Yes, but only in the same way that all potential differences want to be in balance. We can get some work out of this fact.

Hilden is completely right in her conclusion, however:

Copyright is meant, in large part, to protect the market for a given work, and thus to protect incentives to create new works. Yet allowing people to read (for free) a fact-stripping bot's compilation of news might undermine the market for newspapers and their online outposts. And that may lead newspapers to fight back in Congress for a broader version of copyright that would end, or limit, the reign of fact-stripping bots.
Copyright holders are going to fight tooth and nail any rationalization of copyright that hurts their interests.

via Copyfight

Comments (1) + TrackBacks (0) | Category: Copyright | Network Law


1. Elas Giordano on May 29, 2005 09:32 AM writes...

The best way of providing incentive to creators in the 1700's and earlier, and the best way to provide incentive today, in an information age, just aren't the same thing. How could they be?

One alternative is to tax (think of the British TV licence fee that pays for the BBC), survey actual use, and then reward the authors. Maybe there are better ways, but that's one proven method.

Would the founders of copyright law have created this particular system in a world with the internet and BitTorrent? It's wildly unlikely. They would have proceeded directly to some system that rewarded the authors, since restricting distribution would no longer be economically or technologically reasonable.
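The levy-and-survey arithmetic the commenter describes is simple pro-rata division. A minimal sketch, with all figures invented for illustration: a fixed pool of levy revenue is split among authors in proportion to surveyed use.

```python
# Total levy collected for the period, in dollars (invented figure).
levy_pool = 1_000_000

# Surveyed uses per author (invented figures).
surveyed_uses = {
    "author_a": 600,
    "author_b": 300,
    "author_c": 100,
}

# Each author's payout is their share of total surveyed use.
total_uses = sum(surveyed_uses.values())
payouts = {
    author: levy_pool * uses / total_uses
    for author, uses in surveyed_uses.items()
}
print(payouts)
```

The practical disputes are over the survey (sampling versus full metering) and the levy base, not the division itself.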

Permalink to Comment


