I Want to Quit the Gym — Err, the Facebook.

If you watched the TV show “Friends,” you might remember the episode in which Chandler tries unsuccessfully to terminate his costly membership at the local gym: his resolve weakens with each successively less-convincing utterance of “I wanna quit the gym,” until he eventually gives in to peer pressure and remains a member. Facebook is a little like that: I want to quit the Facebook, but I can’t.

Make no mistake, I love social media. Social media keeps us in touch with one another in a fashion unimaginable a scant decade or two ago, and it has the effect of making the world a smaller place — a kind of global village, to use a popular phrase. One of my first forays into making something creative with the Internet was the creation of a kind of social media web page of my own back in 1996, which ultimately led to rebuilding some of my old friendships and, most importantly, to marrying my wife (you can read about how that happened here). If only I had scaled that idea the way Mark Zuckerberg did, I would be writing this on my yacht — I mean on one of my yachts. Hindsight.

Digital technology has generated a kind of Renaissance in communication. If you have read some of this blog, you already know that I am fascinated with the Internet and its impact upon the law and upon society. Through the rapid exchange of information, the Internet has been a catalyst, igniting the development of a myriad of new technologies that rely upon that simple yet immeasurable quality: the ability to facilitate the exchange of information at the speed of light.

As much as I am captivated by those revolutionary technological developments, I am just as enamored with the little things that have emerged as well: video chat, smart phone Scrabble, and of course, social media. As a tool for creating ways to bring remote people closer together, the Internet is unmatched, and Facebook certainly contributes to that effect.

So how has Facebook been so successful and why do I want to quit? A gross oversimplification of Facebook’s corporate strategy probably looks something like this:

1. Cultivate the free exchange of information between users who are already connected to one another and between users who are not connected;

2. through said exchange, generate new connections and strengthen existing ones, thereby increasing the relevance of and the reliance upon Facebook;

3. market Facebook’s user base to third parties; and

4. profit!

Nothing about that plan sounds inherently inappropriate, but if you look more closely at how Facebook achieves each of those prongs, you might feel some apprehension. It is almost certainly no accident that a user’s default account settings encourage the free dissemination of information — Facebook’s privacy settings are geared toward openness, nudging users to share and to connect with one another. Moreover, the settings that control the amount of information appearing in one’s Facebook feed tend to default to “show me everything,” but are far more difficult to adjust to “I don’t want to see anything except relevant posts by people known to me.”

Such dissemination of content achieves prong #2 of my overly-simplified version of Facebook’s corporate strategy: it encourages users to discover and create new connections with one another and the more users do so, the more likely they are to use Facebook.  Correspondingly, Facebook becomes more integrated into our culture.  If you have a Facebook account and you have ever thought about abandoning it for any number of reasons (data privacy, time consumption, messages from acquaintances you’d much sooner forget, etc.), but you chose not to, why did you make that decision?  Chances are you kept your account active at least in part because of your close connections — that subset of people among your Facebook “friends” with whom you do not want to stop exchanging content.  Otherwise stated: peer pressure.

Zuckerberg et al. recognize that peer pressure is a powerful tool: the more peers you have within a network (whether real or virtual), the less likely you are to leave it. One of the simplest ways to generate more peers among users is to disseminate content broadly, and Facebook’s settings are almost certainly designed to discourage you from streamlining your news feed to limit the content that you view.

I like to view posts written by my friends, but I have little desire to see my friends’ comments upon content posted by others. For example, I’m happy to read a post by a friend in my feed that says “we closed on our new house today,” but I don’t want to see that same friend’s comments regarding a photo of a third party’s new dog. However, by default, as soon as my friend makes a comment about a photo of someone’s dog, the photo becomes part of my feed. Am I able to adjust my settings so that I see only posts, but not “likes” and “comments” generated by a particular friend? Yes. Am I able to do this for all of my 239 friends simultaneously? No. Facebook requires me to adjust that setting for each friend individually, which is entirely too time-consuming. Alternatively, I can retroactively classify every one of those friends as “close friends,” or something other than “close friends,” and then choose what degree of content I will be fed from each group. This approach is not only time-consuming, it is also uncertain, as it isn’t clear what content I will actually receive from my “close friends” and what I will receive from those who are not “close.”

This approach is good for Facebook, because users will often simply choose (by not choosing) the default, which leads to a predictably broader dissemination of information.  Other users like me will search in vain for a shortcut to filter content and eventually throw their hands up in surrender (or write a blog post to rant about their frustration) and simply deal with the extraneous content.

But when does the determined push for the free flow of information transition from being simply over-inclusive to genuinely inappropriate? Facebook can always tweak the amount of content that users view: I suspect that no small amount of research has been performed by Zuckerberg’s team to determine the right balance between content that increases connectivity and extraneous information that overburdens users. But what about the nature of that content?

The latter hit home for me recently when a photo posted by a friend of a friend appeared in my Facebook news feed.  Although I am not “friends” with the poster, the photo appeared in my feed because my friend commented on it.  Only one degree of separation set me apart from the poster of the photo, which seems a small divide.  However, the photo in this case was a very personal one: a teenage boy lay in a hospital bed, connected to an intracranial pressure monitor, and apparently in a coma.  The image was followed by a multitude of very personal comments written by the young man’s family, expressing concern at the likelihood of the teenager’s survival.  I read through several of the comments before realizing that I don’t know the family or the young man in the photo, and I felt immediately intrusive, having inadvertently invaded the privacy of someone with respect to something incredibly personal.

Yes, it is reasonable to argue that the family member who posted the photo should have been more attentive to the manner in which that incredibly sensitive image was shared. It is even reasonable to suggest that anything shared on the Internet with a limited group has the potential to be distributed to others (intentionally or unintentionally), as the Internet is an extraordinarily efficient vehicle. However, many users don’t understand Facebook’s somewhat convoluted privacy settings, and as a result, content often appears that was never intended for public consumption. Further, Facebook engenders a sense of security through the design and promotion of its privacy settings. What is most troublesome to me, however, is that users (like me) who would prefer to prevent such content from appearing in their news feeds will find the task cumbersome.

Facebook wants content to be as free as possible, but I think that it can be done better. Over-inclusion may ultimately be the downfall of Facebook’s platform, as users are exposed to content that eventually leads them to pause and consider more carefully what they are sharing and with whom. At some point, user discomfort with the nature of Facebook’s model may gain momentum sufficient to overcome peer pressure, and that may result in a happy evolution for social media, which is always at risk of being replaced by something better.

In the meantime, I will continue to protest weakly that I want to quit the Facebook.


On Trademark Law: Crack and Cracking Pyrex

Technology blogging site TechDirt recently posted a short article about the difficulties faced by consumers when a product no longer incorporates a feature that the public has come to expect. In this case, one of the consumer interest groups concerned was a group composed of crack cocaine manufacturers. If that grabs your attention, go here (or just read on) to watch an entertaining video showing Wolfram’s Theodore Gray taking a blowtorch to a Pyrex container. The article and video were thought-provoking, and they elicited a gut reaction from me that led to some critical thinking about the dynamics of trademark law vis-à-vis the goals of the law itself. I will return to that discussion after a little introduction to Pyrex.

Pyrex glass was introduced in 1915 by Corning Glass Works (now Corning Incorporated), and it was soon embraced by scientists and sous chefs alike for its most appealing property: it consisted of borosilicate glass, which is characterized by very low thermal expansion. That property rendered Pyrex particularly useful for laboratory glassware and for kitchenware, which must be able to withstand rapid changes in temperature. Pyrex was a great success, and in short order, the mark PYREX came to be associated with quality glass products that could be rapidly heated and cooled without risk of fracture. Corning marketed this beneficial property and the reputation of its product grew. Although all Pyrex glass at the time was manufactured from borosilicate, Corning’s original trademark registration for PYREX was simply for “glass.”

Despite its broad registration, even the courts recognized that the PYREX mark had taken on meaning beyond its original goods and services description, as discussed in a 1940 opinion by the Court of Appeals for the Eighth Circuit. In that case, a glass bottle manufacturer that had been producing prescription medicine bottles under its REX trademark since the late 1800s was unhappy when one of Corning’s licensees began manufacturing baby bottles bearing the PYREX mark. By the time of the suit, the prescription bottle manufacturer had expanded its own glassware products to include baby bottles, and it felt that PYREX was too close to its REX mark, considering the similarity of the goods. Notably, the REX bottles were manufactured from ordinary glass, while the Corning licensee’s bottles were made from borosilicate glass. The REX bottle manufacturer sued the Corning licensee for trademark infringement, and Corning itself joined the suit.

After a decision in the district court favoring the REX bottle manufacturer, the Court of Appeals reversed, finding in favor of Corning and noting that “the name ‘Pyrex’ has come to mean, and standard authorities have recognized it as meaning, glassware of a variety ‘resistant to heat, chemicals or electricity.'”   The Eighth Circuit felt that PYREX’s unique thermal properties placed Corning’s product in “an entirely distinct field,” and therefore, in the Court’s view, there was little likelihood of confusion among consumers.  Notably, the Court determined that the plaintiff’s assertion of its rights to a zone of natural market expansion also held no water (milk?).  REX had established itself as a prescription bottle manufacturer and prescription bottles were the sole goods identified in its trademark registration, so it had no right to exclude Corning or its licensees from making baby bottles bearing a similar mark.  The Court noted that it could not accept Plaintiff’s suggestion that the “trademark ‘Rex’ for prescription bottles preserved to it for all time a monopoly not only on prescription bottles made of ordinary glass but on all other bottles of whatever description or quality, and, in fact, upon all glassware.”

Over the next several decades, the fame of Corning’s PYREX mark grew exponentially, and it is instantly recognizable by most consumers today. Or is it? In 1998, Corning spun off its consumer products division, including all of its kitchenware. Corning retained ownership of the PYREX trademark registration, but it ceased manufacturing Pyrex products and licensed the PYREX mark to several other companies. One of those companies, World Kitchen LLC, continues to manufacture products bearing the PYREX mark in its classic, recognizable form. However, at some unknown time, Corning and its licensees stopped making PYREX-branded products out of Pyrex (borosilicate) glass, and instead began manufacturing them from less-expensive soda-lime glass, which lacks the low thermal expansion properties of the original product. This change in product properties was a lesson reportedly learned the hard way by crack cocaine manufacturers, who produce crack by heating cocaine to high temperatures until it begins to coalesce into little balls or “rocks,” at which point it must be cooled rapidly. Rapidly heating and cooling glass with poor thermal properties fractures the vessel, which is apparently very bad for the crack business. I envision astonished crack makers crying out to consumer watchdog agencies everywhere.

So do Corning and its licensees have a responsibility to use the PYREX mark exclusively on goods made from Pyrex glass, as it was used for more than half a century? Consumers (and crack dealers) have come to expect that quality, and Corning for many years promoted that quality through advertisement.

The comment in the aforementioned TechDirt article that triggered my gut reaction appears near the end. It suggests that there has been little outcry about the quiet change in composition of PYREX-branded goods, because

Corning isn’t having a dispute with a competitor . . . Imagine if a counterfeiter were passing off soda lime glass as Pyrex.  The outcry would be huge. Government agencies would be busting down doors and arresting people and using it as a reason to pass ACTA.   But if Corning and their licensees do it under the Pyrex brand, all we can do is shrug.

Colorful characterization aside, the poster is pretty much on the money: if Corning were still producing PYREX-branded products made of borosilicate glass and a competitor were producing a cheaper soda-lime glass product bearing a mark even vaguely reminiscent of PYREX, you can bet that Corning’s first argument for trademark infringement/unfair competition/dilution would be that the cheaper goods would be likely to create consumer confusion and damage the association that consumers have come to make between the PYREX mark and quality glass products. But since this is an issue that affects only consumers (and the corporations in question are all still profiting), the issue goes largely unnoticed. Initially, that seems unfair, particularly if you take the consumer welfare view of the justifications for trademark law. And indeed, consumer welfare is one accepted goal of trademark law: trademarks provide consistency to consumers, especially where quality is concerned. As the U.S. Supreme Court noted, a trademark “assures a potential customer that this item — the item with this mark — is made by the same producer as other similarly marked items that he or she liked (or disliked) in the past.” Qualitex Co. v. Jacobson Products Co., 514 U.S. 159, 163–64 (1995).

For example, I know that when I walk into Target to buy a plastic storage bin, I will probably get a better quality product if I choose a Rubbermaid-branded container than I will if I choose the cheaper Sterilite brand.  And in defense of Sterilite, I may even choose the cheaper brand if the circumstances of the purchase are such that price matters more to me than quality.  The key for the consumer is being able to tie a mark to predictable product qualities, simplifying the act of purchasing.

However, most contemporary theorists cite a second justification for trademark law as well, namely that trademark law should protect the producers of goods and services from the misappropriation of the goodwill that they have worked diligently to create.  In other words, “the law helps assure a producer that it (and not an imitating competitor) will reap the financial, reputation-related rewards associated with a desirable product.”  Id.

This trademark owner-centric view alters our discussion a bit.

The Lanham Act implies a requirement that trademark owners maintain a degree of control over the quality of products that are produced by licensees under their mark, an arrangement which, by the way, is extraordinarily common, especially where brand-extension licensing is concerned (Cheetos Lip Balm, anyone?).  This requirement is tied to the consumer welfare theory: consumers come to associate certain qualities with products manufactured under a particular mark, and in order for those associations to be reliable, quality must not vary from manufacturer to manufacturer.  However, as mark licensing has become increasingly common, the judicial system has gradually lowered the bar on the level of quality control or oversight required of the trademark owner.

Perhaps part of the reason for reducing that bar is the fact that the trademark system is at least partially self-correcting. If, for example, Mercedes-Benz begins manufacturing inexpensive automobiles made with low-quality parts, its reputation will likely soon decline, and the association that many consumers make between quality automobiles and the MERCEDES-BENZ mark will be lost. Consumers who desire high-quality products will drift away (welfare theory), and businesses like Mercedes-Benz that have failed to continue to invest in the consistency of products branded with their marks will lose that goodwill (trademark owner-centric theory). The only burp in the system is the reality that trademark owners who alter their products and lose goodwill do not necessarily lose their marks. Is that a bad thing? My initial gut instinct is yes, because I have a champion-of-the-consumer mentality, but after some consideration, it doesn’t bother me quite as much. A company that changes the nature of its goods will eventually gain the attention of consumers, and consumers will change their purchasing preferences accordingly. A kind of mark evolution occurs: a mark that was once associated with a certain quality or characteristic becomes associated with something different. In this simple version of product change, the risk to consumers is probably mostly short-term, as consumers soon become aware of the change in product quality through word of mouth.

But sometimes that recognition takes time, particularly when a very strong brand has developed a quality reputation through many years of consistent use on the same kind of products.

In the case of Pyrex glassware, the change may be very subtle, and its discovery may in fact be dangerous to consumers (including, but not limited to, crack dealers). I didn’t search Westlaw to determine whether any lawsuits have been lodged against Corning or its licensees for injuries sustained by consumers who used modern Pyrex kitchenware, but I would not be surprised to find at least one.

However, consumer safety is the province of another area of law: torts.  If consumer safety is put at risk because of reasonable (though incorrect) assumptions that consumers are likely to make about product characteristics, then it might be reasonable to simply require a broad (or broader) product safety warning on the product itself.  Certainly consumers are accustomed to such warnings, to the point of numbness from over-inclusion.

At least one issue remains, however, outside the scope of consumer welfare, which I touched upon earlier: competition. If I own a trademark registration, that registration endows me with the right to exclude others from using the same or a similar mark on products of a similar nature. In the case of a famous trademark (PYREX likely qualifies), I can sometimes even exclude others from using the mark on dissimilar goods. However, if I fail to use a word or symbol in the manner envisioned by the statute — that is, in a manner that creates a known, consistent association between my goods and the mark — I risk abandoning the mark, and others may use it for their own products. This is part of the quid pro quo of trademark law that is characteristic of all intellectual property law: the owner gets a limited monopoly in exchange for using its intellectual property in the manner envisioned by the statute. If the owner fails to do so, other members of the public (and private sector) have the right to use the intellectual property as well.

In reality, cases in which a mark is treated as abandoned because the owner has made changes in the underlying product are exceedingly rare. However, as McCarthy notes, gradual changes in product nature are common and consumers expect such changes. Only a “sudden and substantial change in the nature or quality of the goods sold under a mark may so change the nature of the thing symbolized that the mark becomes fraudulent and/or that the original rights are abandoned.” 3 McCarthy on Trademarks and Unfair Competition § 17:24 (4th ed.) (citing Independent Baking Powder Co. v. Boorman, 175 F. 448 (C.C.D. N.J. 1910) (manufacturer of SOLAR alum baking powder assigned rights to another, who substituted phosphate for alum; trademark rights held forfeited)).

Would Pyrex’s change from borosilicate glass to soda-lime glass qualify as “sudden and substantial”? Perhaps. However, until an unlicensed manufacturer begins to use a mark confusingly similar to PYREX on glass goods, we are not likely to see the theory of abandonment tested with respect to the PYREX mark.

The SOPA/PIPA Problem: Everything Old is New Again — Or is it?

SOPA and its senatorial sister bill PIPA are officially stalled in Congress, and now that some of the din surrounding these failed bills has quieted, I think it is worthwhile to take a closer look at their place in the succession of Internet-targeted legislation.

Whether you agree or disagree with the proposed legislation, it is difficult to ignore the amount of media attention the bills received. Much of the hullabaloo surrounding them was the product of propaganda generated by competing interests: Internet companies like Google, Facebook, and Wikipedia on one side, and content owners, including the entertainment industry, on the other.

Following a build-up fueled by heated Congressional debate over the bills and their eventual condemnation by the White House, the public’s attention was firmly captured when service providers like Wikipedia, Craigslist, and WordPress temporarily disabled all or parts of their websites in protest of the proposed bills.  Google even blacked out its logo and launched an anti-SOPA/anti-PIPA public petition, which accumulated more than 7 million signatures in a single day.

Free speech advocates asserted that the legislation constituted censorship without due process, and Republican presidential candidates even took a stand against the bills during a national debate, injecting the issue into prime time. At the same time, the entertainment industry generated its own buzz by purportedly paying lobbyists $94 million in support of the proposed legislation, a fact that was widely publicized. Make no mistake: companies with the most at stake fueled much of the hype surrounding the proposed legislation in an attempt to raise public awareness and persuade Congress to act in their best interests.

However, corporate heavy hitters have been vocal about proposed Internet-focused legislation since such legislation first began to emerge in the mid-1990s, and while the public has been part of the discussion in the past, it may be argued that media penetration and public attention have never risen to the level that surrounded the SOPA/PIPA debate. The stakes are certainly higher than they once were, as the value of e-commerce, online advertising, and the like — both legitimate and illegitimate — can be measured in billions of dollars, a climate far different from ten or fifteen years ago. The mid-1990s don’t seem oh-so-long ago, but bear in mind that this was a time when websites were still decidedly primitive affairs.

The value of Internet-related commerce is the product of the value that the public places on the Internet itself, and this is where the greatest change has occurred over the last decade and a half. As a technology, the Internet is unmatched with respect to the rate at which its application is growing. Not only has the Internet become a universal, borderless medium for communication, it has also become integrated into the fabric of our daily lives, and increasingly so. Everything is becoming tied to the Internet — from brick-and-mortar retail shopping with near-field technology to streaming television, to household appliances and vehicles. From dental offices to DVD players to dating services, the Internet is increasingly ubiquitous.

Much of the attention surrounding SOPA and PIPA is likely attributable to the public’s increasing interest in legislation that affects the way users interact with the Internet. Otherwise stated, the public is more sensitive to any threat to its Internet, and rightfully so. Evidence that the SOPA/PIPA kerfuffle is at least partly tied to the public’s increasing reliance on the Internet might be found by stepping back a few years to look at the climate surrounding the enactment of earlier Internet-focused legislation, particularly the Communications Decency Act and the Digital Millennium Copyright Act. The CDA and the DMCA were enacted in 1996 and 1998, respectively, during the Internet’s adolescent years. The players were mostly the same: Hollywood pushed for the DMCA, service providers (mainly ISPs) resisted it, and free speech advocates offered their two cents. Both bills, however, passed without much difficulty, despite a number of controversial provisions not unlike those found in SOPA/PIPA. Among other things, the CDA and the DMCA have some legislative elements in common with the proposed SOPA/PIPA legislation in that they: (1) contain provisions that offer immunity to online service providers; and (2) provide judicial shortcuts and/or remedies to content owners, to enable them to seek swift and effective relief from online infringement.

The former element is critical to fostering the growth of the Internet, as the drafters of both the earlier and current legislation recognized: if service providers were held accountable for infringement in which they merely act as a conduit, they would be forced to police their networks beyond reasonable means, impeding the growth and innovation of the Internet. The CDA contains a Good Samaritan provision in § 230 that insulates ISPs from liability for torts committed by Internet users and from liability for restricting pornographic content. Likewise, § 512 of the DMCA offers a series of safe harbor provisions that shield online service providers from liability for copyright violations that occur via their services, provided that those providers meet several delineated obligations. SOPA, too, contains such a provision, in proposed § 104, which offers immunity to website owners, domain name owners, payment providers, and search engines that take action against a site dedicated to the theft of U.S. property before a court order to terminate the site has issued. PIPA’s immunity provisions are arguably less clear, but they are present, at least to the extent of providing service providers with immunity for acting once a court order has issued.

Controversial judicial shortcuts and expanded remedies are also incorporated into the DMCA, CDA, SOPA, and PIPA.  For example, the DMCA includes takedown notice provisions that require service providers like YouTube and Facebook to remove content immediately in response to a notice of infringement, without first evaluating the claim with respect to fair use or accuracy.  If the provider fails to act as required by the notice, it cannot avail itself of the DMCA’s safe harbor provisions and it becomes subject to copyright liability.

SOPA contains similar notice provisions in proposed § 103, titled “Market-Based System to Protect U.S. Customers and Prevent U.S. Funding of Sites Dedicated to Theft of U.S. Property.” Section 103 is designed to separate the owners of infringing sites from their financial lifeblood by halting their use of payment systems like PayPal and Visa and stopping the flow of any income from advertisers on those sites. Under the provisions of this section, content owners who become aware of pirated content on a website would be able to serve notice upon any payment or advertising service provider associated with the infringing site. In turn, the service provider would then terminate its business with the owner(s) of such sites. Like the DMCA, there is a judicial shortcut here, because no court proceeding is required to validate the notice, and the notice itself need contain only a minimum of information to comply with the law.

This seems like a substantial judicial shortcut, but § 103 is a bit of a paper tiger, because absent an actual court order, SOPA imposes no penalty upon payment or advertising service providers for failing to comply with the notice. Their participation in the process is entirely voluntary, unless and until a court order is supplied. Section 103 is not without impact, however; it outlines the process that service providers should follow to comply with the proposed law and avail themselves of its safe harbor provisions without fear of recourse. However, with SOPA effectively dead in Congress, its impact may never be tested.

Although the CDA’s anti-indecency provisions were struck down by the Supreme Court in Reno v. ACLU in 1997, the remainder of the Act and the DMCA persist, despite challenges. And without question, the Internet has continued to flourish despite such legislation. SOPA and PIPA are, in several ways, weaker pieces of legislation than those acts, yet public (and corporate) outcry over the proposed legislation has successfully prevented its enactment.

The demise of the arguably milder SOPA/PIPA legislation is puzzling. Is it due to an increased public desire to preserve the Internet in its current form? Or is it instead the product of a shift in power from corporate entities that own content (like the motion picture industry) to corporate entities that serve content (like Google)? Very likely, the answer is both, as companies like Google encourage the integration of the Internet into technology and build new technology around the Internet. Correspondingly, users come to rely on that integration, and the public’s response to anything that might interrupt the status quo is increasingly vocal.

Have we passed beyond a golden era in which Congress can successfully enact Internet-focused legislation through compromise and careful lawmaking? Are SOPA and PIPA examples of the higher bar that legislators must overcome in order to enact such legislation — a bar raised by content hosts and the public itself? Time will tell.

Name-Squatting New Domain Name Registrations

When I registered the domain name cyberlexical.com, I also created a Twitter account with the handle “cyberlexical,” thinking that I might eventually generate a Twitter feed pointing back to my posts and to other relevant Web content.  I’m certain that a lot of people do this with the same motives in mind, and — as it turns out — it might not be a bad idea.

One blogger recently posted an account of his own experience, suggesting that web crawler bots are monitoring new domain name registrations and acquiring like-named Twitter handles, ostensibly to squat on those handles and monetize them.  Locating new domain name registrations is not particularly difficult, considering the number of websites available online that catalog new registrations almost as they happen.  I saw evidence of this shortly after I registered cyberlexical.com, when I began performing Google searches for “cyberlexical” to monitor its appearance in Google’s search results.  Initially, a link to this blog did not appear among the search results, but there were a multitude of results pointing to services cataloging my new registration among a list of other domain names registered during the same time period.  Those bots are efficient.
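Out of curiosity, the front end of such a bot is simple to sketch.  The snippet below is purely my own illustration (the helper name is invented): it derives a Twitter-legal handle (letters, digits, and underscores, fifteen characters at most) from a freshly registered domain.  A real bot would then check whether twitter.com/ followed by that handle remained unclaimed.

```python
import re

def handle_from_domain(domain: str, max_len: int = 15) -> str:
    """Derive a candidate Twitter handle from a newly registered domain.

    Strips the TLD, removes characters Twitter disallows in handles
    (anything other than letters, digits, and underscores), and truncates
    to Twitter's 15-character handle limit.
    """
    label = domain.lower().split(".")[0]       # "cyberlexical.com" -> "cyberlexical"
    label = re.sub(r"[^a-z0-9_]", "", label)   # drop hyphens and other punctuation
    return label[:max_len]

# A squatting bot would feed a stream of fresh registrations through this
# and then test whether each resulting handle is still unclaimed.
print(handle_from_domain("cyberlexical.com"))   # cyberlexical
print(handle_from_domain("my-new-startup.io"))  # mynewstartup
```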


As an aside, such automated cataloging may be a good reason to elect to use your domain registrar’s domain privacy protection feature when registering a new domain name.  Not only are new domain name registrations being cataloged, so are the registrants’ associated contact information.  If you provide personal information in your domain name registration (as I did) for inclusion in the whois database, that information will also pop up on the radar of those services trolling new domain name registrations.  I discovered this the hard way, as I began receiving phone calls from SEO marketing opportunists a mere two days after creating the registration  — and I’m pretty sure the first guy’s real name was not “Robert,” as he suggested.

In any case, I soon set up the privacy feature, for which there was no additional charge — although many domain registrars do charge for this.  Come to think of it, that makes me wonder: do domain registrars that offer privacy services for a fee also farm out your public contact information to their partners in the SEO business, compelling you to either accept their privacy service (for a fee), or submit to being pestered by SEO marketers who are attempting to get you to use their services (for a fee)?

OK, OK, I’m digressing again.  I do that.

There is certainly potential value in acquiring Twitter names that are synonymous with new domain name registrations.  In a Web environment that is characterized by the significance of attractive user names, domain names, email addresses, and other identifiers, the land rush effect compels content owners to gobble up those identifiers — as much out of fear that they will be hijacked by opportunistic squatters as from the desire to put them to legitimate use.  In regard to Twitter, we have already seen some name-squatting disputes appear in the media, lodged by celebrities (well, pseudo-celebrities) and big companies (fine, pseudo-big companies) seeking to gain control of or restrict the use of their names as Twitter handles.  Such cases face uphill battles before tribunals, because they fall outside the scope of legal frameworks like the ACPA and the UDRP, which apply only to second level domain names (e.g., example.com), not to third level domain names or resource paths (e.g., domain.com/example).  Twitter accounts, which are expressed as http://www.twitter.com/username, clearly fall into the latter category.
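The jurisdictional line that dooms these cases (registrable second level domain versus mere resource path) is easy to see programmatically.  Here is a short sketch of my own using Python's standard library URL parser; the variable names are mine:

```python
from urllib.parse import urlparse

parts = urlparse("http://www.twitter.com/username")

# The registrable second level domain ("twitter", under ".com") is what
# the ACPA and the UDRP can reach.
labels = parts.netloc.split(".")      # ['www', 'twitter', 'com']
second_level = labels[-2]             # 'twitter'

# The handle lives in the resource path, outside both frameworks.
handle = parts.path.lstrip("/")       # 'username'

print(second_level, handle)           # twitter username
```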

Without the assistance of such targeted legislation, aggrieved content owners who are also U.S. trademark owners appear to have little remaining recourse against name-squatters, unless and until such names are used commercially in the manner anticipated by the Lanham Act.  Even then, trademark owners would also have to prove a likelihood of confusion to succeed on a claim of trademark infringement.  Claims of dilution or tarnishment are probably even further out of reach, except to those plaintiffs whose marks are among the über-famous — the only marks that are protected under the dilution provisions of the Act.

This leaves trademark owners mostly at the mercy of Twitter itself, which provides a set of factors that it considers in determining whether conduct constitutes unfair name-squatting:

  • the number of accounts created
  • creating accounts for the purpose of preventing others from using those account names
  • creating accounts for the purpose of selling those accounts
  • using feeds of third-party content to update and maintain accounts under the names of those third parties

While these factors echo some of the requirements of U.S. trademark and anti-cybersquatting legislation, they are also comfortably subjective, giving Twitter a great deal of flexibility in reaching determinations about cybersquatting complaints — typical of the protective manner in which company policies are usually written.

Bringing the discussion full circle, it appears that we have: (1) highly-efficient web bots that automatically track new domain name registrations and then register identical Twitter handles; (2) flabbergasted domain name owners, who find little help in the existing legal frameworks; and (3) a somewhat subjective set of policies provided by Twitter itself to deal with name-squatting disputes.  In effect, it appears that it is very easy to lose the opportunity to use a Twitter handle identical to your domain name unless you acquire it quickly, and much harder to get it back if you do not, as the writer of the aforementioned article discovered.

Most marketing specialists will likely respond to this seeming conundrum monosyllabically: “duh.”

Conventional marketing wisdom is to create a concept, prepare it for prime time, and quietly acquire any and all related virtual property on the front end — a Twitter account, a Facebook page, etc. — even if you do not intend to ultimately use them.

But all of this feels more than a little out of sync with U.S. trademark law, which is designed to avoid land rushes by providing a sort of limited monopoly over the use of words and symbols to those who actually use them (BIG emphasis on “use”) in association with products or services.  You cannot register a trademark for a word that you might use; you have to demonstrate that you are genuinely using it in association with your products or services, or that you have at least a bona fide intent to use it soon, not merely an inkling that it might be valuable and a desire to reserve it.  In the Web-verse, the approach is the opposite: when a user conceives a new name or phrase that s/he wishes to use as a tool for branding, the user is also compelled to scour the Internet for other unreserved instances of the term, lest s/he be beaten to the punch by clever opportunists.

So why the disparate nature of the Web vs. the “real” world with respect to the use of trademarks? Aside from the consummate reality that the law is sluggish when it comes to catching up with the rapid evolution of technology, the central difference is the immediacy of the Internet itself.  This is not your father’s marble and granite Patent and Trademark Office: the Internet happens in an instant.  In an analogous comment concerning copyright, one of my favorite quotations reads:

Notions of security and control that may have been exercisable in the non-networked, analog world cannot be effectively transferred to a realm where even a single digital copy can propagate millions of perfect clones, world-wide, almost instantaneously, and where control over the quantity and destiny of the bits that comprise digital media will be imperfect at best.1

The quandary is the same in the name game: it happens quickly and without an effective administrative filter on the front end, so for now (and perhaps for the foreseeable future) it must be policed largely on the back end. The law is imperfect and it requires those who embrace new technology to anticipate and outmaneuver opportunists who seek to monopolize that technology.  Eventually, the legislature may draft new laws that better harmonize our existing legal frameworks with the dynamic nature of the Internet, but the Internet is a rapidly moving target, and new legislation and the accompanying litigation that it generates are costly.  In the meantime, the bots are watching.

1. Letter from Philip S. Corwin, legal counsel for the owner of KaZaA peer-to-peer software, to Senator Joseph R. Biden, Jr. (Feb. 26, 2002).

New Generic Top Level Domains and the Battle for Search Supremacy

There has been a great deal of press lately surrounding ICANN’s planned implementation of a new system that will allow entities to create their own generic top level domain names (gTLDs). If you are already familiar with the details of the new gTLD system, feel free to skip ahead a few paragraphs; otherwise read on.

The Internet Corporation for Assigned Names and Numbers (ICANN) began working on a proposal for a new gTLD system several years ago that would supplement the existing collection of gTLDs with which we are all familiar, i.e., .com, .net, .org, .info, etc.  Following multiple drafts and several periods of public comment, ICANN’s Board of Directors approved the new plan on June 20, 2011.  This plan opens the Internet naming system to private groups by allowing them to create their own gTLDs, including branded domains (.canon, .IBM), generic domains (.ski, .shop), and cultural brands (.gay, .irish), among others.  Unlike the present system, the new gTLDs will present substantially greater hurdles to those who wish to obtain them: the cost alone of applying for such domains — $185,000 — will be a barrier to most small entities.

In addition, the new domain system requires applicants to undergo an examination phase, similar to the U.S. trademark application process, through which the applicant’s right to register a given domain name will be reviewed and interested third parties will be provided with the opportunity for protest.  ICANN hopes to minimize the impact upon trademark rights and reduce opportunities for cybersquatting through this process.

Finally, successful gTLD applicants must have the infrastructure to administer a domain name registry, because creating a new gTLD is only the first step: applicants must also be able to manage second level domains tied to their gTLDs.  For example, if Canon, Inc. registers .canon as a TLD, it must also be prepared to administer that domain, that is, to provide registrations for domains like support.canon,  research.canon, etc., to those who seek such registrations, both internally and externally.  And in managing those domains, new gTLD owners will have to create and administer a rights-protection mechanism for third-party trademark holders.

The purpose of ICANN’s plan, ostensibly, is to “promote competition in the domain name market while ensuring Internet security and stability.”  Whether that rather vague directive can be achieved remains to be seen.

The new gTLD application process will open in January 2012, just a few months from now.  Ahead of the impending application period for new gTLDs, there has been no shortage of criticism in the media about ICANN’s plan.  Most of that criticism centers around two perceived issues, namely the enormous cost to brand owners who feel bullied into participating in the new system, and the potential litigation that may accompany the trademark issues surrounding the new domains.

There are certainly points for very interesting discussion in relation to both of those concerns.  From a trademark standpoint, for example, clashes between like-named corporate entities such as Delta Airlines, Delta Dental, and the Delta Faucet Company are assured.  Why settle for .deltafaucets when you might obtain simply .delta? Why choose .dovechocolate when you might be able to obtain .dove, to the chagrin of Dove soap manufacturer Unilever?

Likewise, the enormous cost of applying for and maintaining a registry for a new gTLD will not only increase marketing investments borne by companies and their shareholders, it will also effectively push First Amendment speakers off of the domain name platform.  Specifically, the low cost of registering traditional second level domain names for the purpose of critical speech and parody (e.g., http://www.walmartsucks.com and http://www.stopBP.com) has historically placed First Amendment speakers on equal footing with their targets, because such domain names can be acquired easily and inexpensively.  Under the new gTLD system, however, there will be far less opportunity for such speech, as costs are exponentially higher and the burden of administering a domain name registry will be beyond the average Internet user’s capabilities.

Notwithstanding the validity of these concerns, I think the real fly in the ointment is the view from 10,000 feet.  Little attention is being given to the impact that ICANN’s new system will have on the quiet battle taking place between the domain name system and search engine operators like Google, a subject that I discussed at length in my recent article (shameless plug alert!) Fifteen Years of Fame: The Declining Relevance of Domain Names in the Enduring Conflict between Trademark and Free Speech Rights, 11 J. Marshall Rev. Intell. Prop. L. ___ (forthcoming 2011).

Presently, an Internet user who seeks to locate a website can rely upon either type-in search, or a search engine to find that site.  Type-in search is a kind of trial-and-error approach: a user looking for information about the hours of operation of the local Red Lobster may simply guess (correctly) that typing RedLobster.com into a browser address box will reach the desired website.  A less certain user might type “Red Lobster” into a search engine search box and be provided with a list of search results, first among them RedLobster.com.  Users are less likely to experiment with type-in search when the result is less certain.  Is it DeltaAirlines.com or simply Delta.com?  A user may save time by typing “delta” into a search box.

Google not only counts on this trend toward reliance upon search engines, it cultivates it.  For example, the Google Chrome browser features an “omnibox” — a single box that will accept user input consisting of either domain names like delta.com or ordinary language like “delta.”  Presented with the latter, the Chrome browser will take a user to a list of search results related to “delta.”  Likewise, those search results tend to visually de-emphasize the domain name addresses associated with the websites listed in favor of plain language titles and descriptions of the listed sites.  In this manner, users become accustomed to viewing search results and less attached to domain names.

There is no mystery as to why Google does this: revenue. Google inserts itself, albeit briefly, between the user and a desired website like RedLobster.com, in the hope that the user will occasionally be drawn to click on adjacent click-through advertisements, which Google sells to its advertising partners, for enormous profit.  In effect, Google hopes that you tune in and don’t get up during the commercial.

The main argument against Google’s approach to locating a website is one of efficiency: why should a user be presented with more information than desired, in the form of both organic and paid search results?  If a user knows or can accurately guess the correct domain address, then the extra step implicated by a search engine is extraneous.  However, the argument in favor of using search engines over type-in search is identical: efficiency.  Internet users can quickly become skilled at using Google’s search engine, and Google’s search engine algorithm can actually learn from its users’ search habits.  The user need only become proficient at predicting how best to satisfy a single entity — Google — rather than guessing how an infinite number of domain registrants will choose to position themselves on the World Wide Web.

In my view, ICANN’s new gTLD plan will create a kind of dilution of the domain name system, not unlike the toll-free area codes that were introduced in the 1990s to supplement the previously dominant 800 area code.  In permitting the addition of potentially thousands of new gTLDs to the current domain name system, ICANN may actually alienate Internet users and push them more firmly toward reliance upon search engines.  A user confronted with the choice of typing delta.com or deltaairlines.com, presented with the additional possibility of .delta, may simply admit defeat and choose to type “delta” into a search engine box.  In the grand scheme, perhaps this is not an awful development.  Search engines are a bit like phone books peppered with advertisements: they offer a free (to consumers) service that is easy to use at the cost of a few potential distractions.  The alternative is memorization and/or trial and error, which is either impractical or impossible.

However, the likely outcome of this trend will be a whittling away of the power and relevance of the domain name system, and perhaps more significantly, the transition of that power into the hands of private search engine-operating entities like Google and  Microsoft.  While I recognize the potentially frightening implications of that transition, I am less concerned about the mishandling of that power than I am about the investment that ICANN is making and is asking others to make in order to bring its new gTLD plan to fruition.  I believe that it is a plan which will ultimately provide little benefit to the public.

It is hard to say whether the domain name system as a whole is worth salvaging.  In an era of rapidly evolving technology and correspondingly evolving law, we may one day regard the domain name system as an ephemeral tool akin to dialing the operator to ask for an extension.  In the meantime, I believe it would be wise to tread carefully in how we invest in improving that system.

Bit-Squatting 101

An interesting article appeared today in the Tech section of Forbes about an esoteric form of domain name squatting that has been described as “bit-squatting.”

This phenomenon is rooted in the binary nature of communication used by the Domain Name System (DNS): computers relying upon the DNS communicate with one another through a series of 1’s and 0’s.  There are some interesting intellectual property issues here that are worth some discussion, but a little technical overview might be useful first.

Every computer connected to the Internet is assigned a unique numerical identifier known as an internet protocol (IP) address.  IP addresses serve to identify individual computers and they make it possible for computers to locate one another on the internet.  These addresses consist of four numbers ranging from 0 to 255, separated by periods, for example, 192.0.2.1.  A portion of each IP address represents the network that the computer utilizes, and the remaining portion identifies the individual server machine where the hosted web content resides.
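For the curious, Python’s standard ipaddress module makes that structure concrete.  This is just my own illustration: the address used is one reserved for documentation, and the /24 network split is an assumption made for the example (real allocations vary).

```python
import ipaddress

# 192.0.2.1 sits in a block reserved for documentation -- a safe example.
addr = ipaddress.IPv4Address("192.0.2.1")

# Assume, purely for illustration, a /24 prefix: the first three numbers
# identify the network, and the final number identifies the host on it.
net = ipaddress.IPv4Network("192.0.2.0/24")

print(addr in net)   # True -- the address lives on this network
print(int(addr))     # 3221225985 -- the same address as one 32-bit number
```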

Because it would be difficult to remember the numeric addresses which computers utilize to locate one another via the Internet, the Domain Name System was developed to associate IP addresses with the more memorable domain name addresses with which we are all familiar.  When an Internet user types a domain name like example.com into the address box of a web browser, a request is sent to a remote domain name server to query the IP address associated with that domain name.  The name server then reports the IP address to the browser and the browser attempts to make a connection to the computer located at that numeric address.  In this sense, a name server acts as a sort of automated phone book for Internet users.  Once the domain name is translated into an IP address and the connection is made, web content stored on the remote computer is sent to the user’s browser and a web page appears.
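With that overview in place, the bit-squatting trick itself is easy to demonstrate: flip any single bit in the ASCII encoding of a domain name and you often get a different, syntactically valid domain, one that a device suffering a memory error might silently request.  The sketch below is my own illustration (the function name and its filtering rules are invented, not drawn from the Forbes article):

```python
import string

def bitsquat_candidates(domain: str) -> list[str]:
    """Domains that differ from `domain` by exactly one flipped bit.

    Flips each of the 8 bits in every character's ASCII encoding, keeping
    only variants that remain plausible hostname characters (letters,
    digits, hyphen).  The dots separating labels are left alone.
    """
    valid = set(string.ascii_lowercase + string.digits + "-")
    results = set()
    for i, ch in enumerate(domain):
        if ch == ".":
            continue  # don't disturb the label separator
        for bit in range(8):
            flipped = chr(ord(ch) ^ (1 << bit)).lower()
            if flipped in valid and flipped != ch:
                results.add(domain[:i] + flipped + domain[i + 1:])
    return sorted(results)

# One flipped bit turns "cnn.com" into "con.com", "cjn.com", "cfn.com", ...
print(bitsquat_candidates("cnn.com")[:4])
```

A bit-squatter registers a handful of these variants and then simply waits for corrupted DNS requests to arrive.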

Privacy in Google+

I wrote earlier this month about user privacy in the new Google+ social media network, vis-à-vis the privacy approach employed by its primary competitor, Facebook.  Today I spotted a useful guide to privacy settings in Google+ (and in Google’s other products) which illustrates the more-transparent approach utilized by Google in allowing users to keep track of who views their data.

The ten minute video, produced by a third party, is well done and worth a look if you are interested in privacy.

Google+: Not a Dinosaur

I have been playing with the new Google+ social networking service for a few days now.  I would like to say that I was granted an invite during this limited field test because Google recognized my technological savoir-faire, but in truth I am just fortunate to have a few friends who work at the Big G and one of them was kind enough to send me an invite.

I like Google+, although that doesn’t come as much of a surprise; I like the vast majority of Google’s product designs — largely for their elegant simplicity — and I like Google’s effective but unobtrusive approach to generating revenue from its online products.  Others will certainly disagree, but Google’s online presence reminds me of the early days of the World Wide Web, when developers who were simply enamored by the ability to connect with others were sharing content on the Web without concerns about monetizing every megabyte.

Make no mistake: Google is a business and a very profitable one.  Google supposedly makes more money from advertising than all U.S. newspapers — combined — and any company that can afford to buy its own Tyrannosaurus Rex is doing just fine in my book.1  But Google’s portfolio of products and services for everyday Internet users is ever-expanding, accessible, and provided at no direct cost to those users.  Yes, there are arguments about the value of user information and the cost to Google’s users in terms of privacy, but this is one area where I feel Google has done it right, and it is well-illustrated by the comparison between Google’s new social media network and its largest competitor, Facebook.

My primary qualm with Facebook has always been the platform’s seemingly heavy-handed approach to user privacy.  When Facebook creates new opportunities for third parties to utilize its users’ information, the default approach is to set user preferences to “allow,” unless and until users discover the new setting and disallow it.  No doubt the company markets the effectiveness of this approach to its partners and generates substantial revenue by doing so.  And of course, by finding new outlets to open user data to others, Facebook’s viral networking effect propagates.  I find myself constantly policing my Facebook privacy settings to close down new breaches silently added by Facebook.  It just feels creepy.

Does Google leverage its users’ information? Of course. Google is scanning my search queries, Google is scanning my Gmail, and Google is no doubt scanning my posts in Google+, all with an eye to delivering relevant click-through advertisements that will appeal to me.  But to me, at least, Google’s approach doesn’t feel as obtrusive or clandestine. Some would probably argue that this simply means that Google is better at massaging its users into a sense of security, and while that is almost certainly part of the model, I believe that Google’s “don’t be evil” mantra is based in reality as much as in marketing. Google sells advertising to its partners and it pairs that advertising with my perceived interests, but there is a disconnect in that pairing: unlike Facebook, Google isn’t simply feeding my demographic data to its partners to use as they see fit.  It’s the lesser of two evils, in my mind, in exchange for free access to the Google products that I actually like to use.

So what else do I like about Google+? The “circles,” for one.  Google allows me to categorize my connections into groups known as “circles,” including “friends,” “family,” “acquaintances,” and people who I don’t know personally, but whose posts I like to follow (“following”).  Whenever I post content — comments, photos, my own biographical data, etc. — I can decide which circles get access to it.  This goes against the grain of viral marketing that Facebook has employed, because it trims down the distribution of users’ content to others, limiting opportunities for growth of the platform through connections.  But Google, no doubt, recognizes that its product will expand without employing that approach and it knows that users like me will appreciate the privacy that its circles provide.

In addition to circles, I like the simple interface inherent in Google+.  While I am certain that the GUI will evolve in time as the platform develops, I enjoy the streamlined and inconspicuous interface which is characteristic of all of Google’s products.  Even the Google+ mobile Android app, which I am using on my smart phone, has a clean feel to it and it simply works, even in beta.

An interesting spin on all of this is the potential downside (for Google) of one of my other favorite aspects of Google+: it is highly integrated with Google’s products and services.  In an age where my data is increasingly spread around on the Internet, I like the fact that Google+ simply ties itself to the other Google products that I already use, and I can tighten down the number of online products that I utilize by keeping them in one family. Google+ integrates my Google Profile, my contacts, my Google Talk chat interface, my Picasa photos, etc.  Many have voiced concerns about the Big Brother effect inherent in consolidating all of one’s online interactions in one company’s products, and there are certainly some valid arguments there, even after wading through the wilder conspiracy theories.  However, I think the larger concern for Google should be the risk of antitrust prosecution stemming from its integrative approach to its products.

Greatness is a double-edged sword: a company may become successful when it accomplishes a multitude of tasks for its customers, especially if it accomplishes them incredibly well through innovation and business acumen.  When that kind of efficiency includes tying products together in a manner that obviates the need for consumers to rely upon other companies situated in the same industry, there are inevitably antitrust concerns.  Google has been active in integrating its products, a fact that has not escaped the attention of the FTC, which launched an antitrust probe into Google’s activities last month.  That inquiry, by the way, has not yet led to a subpoena, but it will be interesting to see if one of the things that I love best about Google is also the thing that leads to its next major legal speed bump.

Google can and almost certainly has taken some notes from Microsoft’s tiff with the DOJ more than a decade ago, which was characterized by Microsoft’s heavy-handed attempts to tie its Internet Explorer browser to its other products, in order to give its browser a competitive edge over the competing, well-established, and now notably defunct Netscape browser. Once Google+ is opened to the public, the effect of its integration with other Google products should become more evident.  If that effect ultimately spells the demise of Facebook, there will be more fuel for an antitrust fire. I, for one, would hate to see that.


1. OK, OK, purchasing a dinosaur is not exactly a hard indicator of corporate solvency, but I had to work that fun fact in here somewhere.

New Publication & Some Thoughts on ExpressO

I’m very pleased to write that my latest article has been accepted for publication by several excellent law journals.  This was my first foray into the use of ExpressO to submit my work to multiple journals online and I must say, it was much easier than the old way.

Pros: submit to multiple journals simultaneously for a reasonable fee; easily search for and target journals that are a good match for your publication subject matter; utilize the service to alert journals once you have an offer to publish and request expedited review of your article, all electronically

Cons: it is always difficult to gauge the interest that an article might receive and the shotgun approach with ExpressO is almost too easy.  I received an offer almost immediately from one journal, which provided me with a very short period to accept or decline.  My ultimate decision to let that offer pass before I received offers from other journals caused me no small amount of distress.

On balance: ExpressO is a great service, and had it been available to me when I was a law student, I might have been more proactive about submitting my work for publication.  I still believe that some of the articles that I wrote as a student were among my best work and I regret that I didn’t pursue publication more actively.

Among my offers to publish, I finally accepted the one made by the John Marshall Review of Intellectual Property Law.  The decision to choose between that journal and several others was a difficult one, so I relied upon the advice of some of my former law school professors.  I also reviewed the Law Journals: Submissions and Ranking database generated by Jack Bissett at Washington and Lee University School of Law.  The database offers a sort of ranking of law journal prestige, based upon several factors, including how often articles in each journal are cited by others.  It is fairly user friendly, once you figure out how to navigate the syntax.

My article, Fifteen Years of Fame: The Declining Relevance of Domain Names in the Enduring Conflict between Trademark and Free Speech Rights, will be published in RIPL this fall.  In the meantime, I will post a working draft on my Selected Articles page.