Duke of URL
From The Independent:
Tim Berners-Lee, the publicity-shy physicist who invented the world wide web, has been awarded a knighthood.
An unsung hero of the modern age, Mr Berners-Lee is named in today's New Year's Honours List for "services to the internet" - creating the system that has revolutionised computer use across the globe.
While I have nothing but contempt for monarchy and all its trappings (yeah yeah, I know contemporary knighthoods are different), Berners-Lee deserves every award he gets and then some.
Attention Kinks fans: Ray Davies is picking up a CBE. More interesting, though, are the folks who turned down such honors, including Davids Bowie and Hockney and Nigella Lawson.
Whole story here.
[Link via Jeff Jarvis' BuzzMachine]
Somehow, if I were going to design a computer system I don't think I'd have academic hacks or college punks do it. And I'd certainly never give TBL or Andreessen a knighthood.
Every time you hear double-yoo double-yoo double-yoo dot long-name dot com, thank TBL.
Every time you change a popup menu on a web page and a new web page has to be loaded to update the other information on that page, thank TBL.
Every time you have an unknown problem with your HTML code (or your stylesheet or your XML or your whatever the fsck text-based crap), thank TBL.
Every time you need to a) know what a cookie is, and b) accept or reject or remove them, thank TBL.
Certainly, a large part of the problems with the web are due to Netscape rather than TBL. However, he started it all. Maybe if the web had been competently designed it wouldn't have been as popular, who knows.
Here's a suggestion I wrote a couple of years ago. Of course it would need lots of work, and it would never succeed, and I am not the king of OOD, but I think if the web had been designed by someone else (for instance the guy who designed OpenDoc) it would be closer to that proposal than what TBL and Netscape did.
I don't blame TBL for having created HTML as he did--it made sense and built on existing technology. I do blame him for apparently being very involved in creating the kind of user-hostile culture that pervades the W3C and comes up with stupid angels-dancing-on-the-head-of-a-pin concepts like the CSS box model. Weird XHTML and CSS puritanism based on obscure theoretical requirements that affect maybe 3% of users becomes the priority, while making it "easy to use" for the people who work with this stuff day in and day out has become an afterthought, if it's thought of at all.
Netscape, for better or ill, responded to author demands and made things simple to implement. But with the W3C? Try wrapping your brain around 100% height NOT being the height of your current window or even the final rendered document (as width is), but the height of your window when the document was first rendered, not to be resized ever again.
I'm tired of filing bugs against Mozilla only to have the developers mark it invalid and say "it's stupid, but that's the way the spec is written."
The whole internet thing was my idea. One night Tim Berners-Lee picked my brain big time. I should be "Sir Al" now. Oh well; Bill just kept using it for porn anyway.
Al Gore
Text-based systems have very serious limitations. For something which is primarily text-based like Usenet, text makes some sense.
However, what I'm talking about is a quite different concept than anything remotely text-based. I'm talking about "real" programming, with objects talking to other objects. I doubt whether people like TBL or Andreessen are capable of getting what I'm talking about. However, anyone who's even done something like write a simple MFC app, not to mention those who've done Java development, should understand that "real" programming beats abhorrent kludges like HTML, XML, CSS, XHTML, BLAHBLAHBLAH any day.
How would you, for instance, programmatically access Yahoo's directory? Write a screen scraper, just like you would if you were working with a 1960-era mainframe. Couldn't TBL or someone else have taken a look ahead a few years and realized that people might want to programmatically access these web sites?
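For illustration, here is what "programmatic access" via screen scraping actually looks like, a minimal sketch assuming a made-up fragment of directory HTML (the markup, paths, and category names are invented, not Yahoo's real pages):

```python
import re

# Hypothetical snippet of directory HTML -- the only "interface" a
# scraper gets is the markup meant for human eyes.
html = ('<li><a href="/Science/">Science</a> (4,210)</li>'
        '<li><a href="/Arts/">Arts</a> (7,945)</li>')

# Pick category names, paths, and entry counts out of the markup with
# a regex -- brittle parsing that breaks the moment the layout changes.
pattern = re.compile(r'<a href="([^"]+)">([^<]+)</a> \(([\d,]+)\)')
categories = [(name, path, int(count.replace(',', '')))
              for path, name, count in pattern.findall(html)]
print(categories)
```

The fragility is the point: a cosmetic redesign of the page silently breaks every scraper built against it, which is exactly the mainframe-era situation described above.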
Correction: Robert Cailliau and Jean-François Groff co-invented the WWW with Tim Berners-Lee.
To be fair to TBL, XML is designed for just such applications. Sure, it's text-based, but more importantly, it's an open standard, fairly readily readable by our digital brethren. Ideally, you access the same document store with XML + CSS (my preference) or transform the XML with XSLT and render it as necessary (booo, extra processing on the server side).
For me, a nice Java interface would be simple (look to Rebol and CURL for nice ways to get at such stuff), but I deal in content management, and there's no way in hell I could train a Reason editor to do that sort of stuff, and to date, GUI editors are just too heavyweight and unwieldy. Most people only need line breaks, bold, italic, and maybe inputting a graphic. That's it.
Yes, one of the big flaws of the W3C is trying to make XHTML into a discoverable XML schema, which is dumb. If you need XML, use XML and transform it somehow (I prefer stylesheets). If you don't, then use HTML, or XHTML if you just want to generalize your parsers a bit.
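To make the contrast with screen scraping concrete, here is a minimal sketch of the structured alternative: the same hypothetical directory data published as XML, which a program can read directly (the document and field names are invented for illustration):

```python
import xml.etree.ElementTree as ET

# The same directory as a small XML document: an open, structured
# format that programs can consume directly, no screen scraping needed.
doc = """<directory>
  <category path="/Science/" entries="4210">Science</category>
  <category path="/Arts/" entries="7945">Arts</category>
</directory>"""

root = ET.fromstring(doc)
rows = [(cat.text, cat.get("path"), int(cat.get("entries")))
        for cat in root.findall("category")]
print(rows)
```

From here the same data could be styled with CSS or transformed with XSLT as described above; the consumer never depends on presentation markup.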
Also, the RDF crowd will bend your ear forever about how they have the perfect answer to discoverability and start blabbing about Dublin Core, but I quickly find they go into theory land as fast as TBL.
The bottom line is that the genius of the original HTML spec was something that any idiot could learn well enough to author a paper in a simple text editor without the need for replicating or standardizing the Word document format. Unfortunately, it's gone in a bizarrely different direction by steadfastly trying to put a saddle on a cow and ride into the sunset.
Under my (Quixotic) scheme, one of the simplest web pages would be one containing nothing but a text area object. That text area object would be populated with a run of text in some rich text format, like .rtf. All the user would have to do is open up a wordpad-like app, type in the text, use buttons to add bolding, URLs, etc., and then publish that file to the web.
A more complicated web page would contain, say, a text area, an ad banner, a button banner, and a sidebar (for instance, like that at http://www.reason.com/rb/rb123103.shtml). All the author would have to do is enter their text into this wordpad-like app, and it would be placed into a pre-formatted template. Each of the parts of the web page would be an actual object, like a java object.
The text area could be sized and have its other attributes set. For instance, one could say that the text area will take up all of the page between the sidebar and the right margin and between the banner at the top and the bottom of the page.
Most of these classes would already exist on the client. For instance, a standard banner could be supplied by the browser. When reason.com first loads, the banner would be loaded with the reason graphic and home, about, search, subscribe, and advertise buttons, along with the commands to execute when the buttons are clicked.
When the user clicks on a link, logic in the "web page" could decide to just change the text area, leaving the rest of the page alone. The text area could be told something like "when clicked, load this page." Or, it could be much more complicated using custom code. Similarly, user preferences could be stored that would act like a style sheet: bold could be shown as purple, etc. However, unlike a style sheet, each web page would be smart. The smarts of a web page wouldn't have to be used to get something workable, but they would be there for people who want to expend the time and money.
Any data formats involved in this plan would be standardized, and basic software (like the wordpad-like program) would be public domain. And, because the formats would be both binary and precisely specified, there wouldn't be any misunderstandings over whether something is good or not.
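As a rough illustration only, here is a toy sketch of the "page as objects" idea, with JSON standing in for the precisely specified binary format described above (all class and field names are invented):

```python
import json

class TextArea:
    """One part of a page: a rich-text run plus layout hints."""
    def __init__(self, rtf, anchor):
        self.rtf = rtf        # the rich-text payload, e.g. an .rtf run
        self.anchor = anchor  # e.g. fill between the sidebar and margin

    def to_dict(self):
        return {"type": "TextArea", "rtf": self.rtf, "anchor": self.anchor}

class Page:
    """A page is just a list of objects, serialized for the client."""
    def __init__(self, *parts):
        self.parts = list(parts)

    def serialize(self):
        return json.dumps([p.to_dict() for p in self.parts])

page = Page(TextArea(r"{\rtf1 Hello}", "fill-right-of-sidebar"))
print(page.serialize())
```

The client would hold the matching classes, so only the serialized data, not the code, travels over the wire for common page types.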
I guess Ms. Lawson really does bite, then.
I'm confused. I thought the Pentagon (DARPA) invented the internet.
Jean Bart,
No, DARPA actually did create the Internet. By that I mean they designed the basic protocols (TCP/IP) and infrastructure that transport data in many forms around the world. The WWW is just one interface to information accessible on the Internet, albeit the most popular one, but you have others like FTP, Usenet, Telnet, IRC, email, etc. Heck, I wonder if there are any gopher servers alive anymore?
linden,
DARPA invented what we think of today as e-mail (and perhaps USENET?). The WWW, meaning the areas of the internet that we surf with browsers, was created by others.
Lonewacko: now I understand the 'lone' part of your alias a bit better.
Damn, and I've been blaming Al Gore all along.
Actually, you could blame Jon Postel for a lot of what's wrong, Berners-Lee was just building on what was there. But it's all foolish 20-20 hindsight.
Right now, I blame Microsoft and AOL, who have the utter power to force fixes, but are instead following in Netscape's footsteps and making things worse for their own short-term gain.
Usenet was invented by two students from IIRC Duke or UNC. History page here: http://www.vrx.net/usenet/history/
What we think of as email was invented by someone else, I believe. As stated above, I believe DARPA just came up with the underlying protocols that move things around. Look at the early RFCs for more information.
"Lonewacko: now I understand the 'lone' part of your alias a bit better."
Good one. Except, there were alternatives to the WWW. For instance, AOL, that Mac-friendly thing (FirstNet or something), etc. And, there have been distributed object systems for a long time. And, there was OpenDoc, Taligent, and similar. All of those were available before the WWW was invented or became popular.
Just to pull a name out of a hat, if Kurt Piersol (http://en.wikipedia.org/wiki/OpenDoc) had designed the WWW, I think it would have turned out much differently. It would certainly be a lot easier and cleaner to program for, and there would be a lot less BS like that I mentioned above. On the other hand, perhaps something that made the WWW popular was that it was designed by someone who's apparently as dumb as most of the WWW users. And, if the WWW had come from one company, or even a consortium, it might have failed due to politics.
Steve,
I see; thank you. 🙂
The French Minitel was the first major public interactive network; it is still alive. It is interesting how robust a system designed to replace phone books has become.
Article on minitel: http://news.bbc.co.uk/1/hi/business/3012769.stm
Seems to me that Lonewacko's proposal is basically to turn the browser into some sort of thin RMI/IIOP client and serialise every other damn thing, every object (content, state, controls, etc.), into a server-side repository.
Not only are you talking some major refactoring here, you also have to wonder about performance issues. For reference, just look at the performance nightmares one can have with entity EJBs and their endlessly evolving spec/features.
"Any data formats involved in this plan would be standardized,"
That's pretty key to this scheme. I didn't see any protocol or specs on the site.
Interesting stuff, anyway.
There are indeed still gopher servers, plenty of them.
Lonewacko's quest is indeed Quixotic. The net was designed to allow various server-based 'standardized' systems (with widely varying standards) to communicate. Standardized server-side applications are exactly what Microsoft wants. With everyone using their standards. No thanks.
There are no specs on the site because this is just something I wrote up and I haven't done anything about it since. I don't really intend to do something about it either, as it would be no use, the WWW is too firmly entrenched.
In response to Kathy's comments, yes, it's true, I am secretly a Microsoft disrupter placed here under Bill Gates' direct command. Our goal is to use reason magazine as a launching point to take over the entire WWW.
I'm familiar with performance issues. In its most basic form*, there would be little difference between my scheme and the current scheme.
* A "web page" consisting of a standard container object that contains a standard RTF object. The classes for the standard container and the standard RTF viewer would be provided by the browser; if yahoo needed a special container or viewer, it would be downloaded just once and used until not needed/deleted from the cache, etc. All that would need to be sent would be some sort of resource (or serialized data) for the container and the RTF viewer. So, it wouldn't be much larger - and might be much smaller - than the current scheme. Given a request for a specific "web page," something like a servlet could return the data, or it could just be static like a regular web page. The client could take action with the web page, using large print if the user has selected that preference, etc.
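The "downloaded just once and used until not needed/deleted from the cache" behavior can be sketched as a simple client-side class cache (the class names and the fetch stub are hypothetical, standing in for a real network download):

```python
class ClassCache:
    """Client-side cache of viewer/container classes, keyed by name."""
    def __init__(self, fetch_class):
        self._fetch = fetch_class  # called only on a cache miss
        self._cache = {}
        self.fetches = 0           # counts actual network round trips

    def get(self, name):
        if name not in self._cache:
            self._cache[name] = self._fetch(name)
            self.fetches += 1
        return self._cache[name]

def fake_fetch(name):
    # Stand-in for downloading class code from the server.
    return f"<code for {name}>"

cache = ClassCache(fake_fetch)
cache.get("StandardRTFViewer")
cache.get("StandardRTFViewer")  # second request served from the cache
print(cache.fetches)            # 1: only one download happened
```

After the first visit, a common viewer class costs nothing to reuse, which is why the serialized page data could end up smaller than the equivalent HTML.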
(I was just joking about the MS part of the previous comment. I am not a paid MS FUDster.)
Lonewacko: The WWW succeeded because it was: 1) in the right place at the right time; 2) Adequately fulfilled the task at hand; 3) Built on the already familiar SGML; and 4) Wasn't encumbered by 29,837,498,734 stupid patents and trade secret laws.
Aw, what the hell, the WWW has definitely helped raise the employment rate in India, so what's the problem?
>> The WWW succeeded because it... Adequately
>> fulfilled the task at hand
>
> That's obviously false. Just look at all the
> "enhancements" and "works of genius" perpetrated
> by Netscape and others.
None of which had to do with sharing physics research data. I assume you realize that this was the "task at hand" of which I spoke?
>> ... built on the already familiar SGML
>
> To whom, librarians?
No, librarians didn't know SGML, and I doubt very many do today either. Physics researchers, on the other hand, often used it to write papers. They were the target audience.
>> ... wasn't encumbered by 29,837,498,734 stupid
>> patents and trade secret laws.
>
> Like the complaints about text vs. binary,
> that's a strawman. There's no law saying that a
> better design had to be so encumbered.
No such design was forthcoming. If someone had come up with something more open and more extensible, it would have been the accepted standard. But TBL got there first.
"The WWW succeeded because it... Adequately fulfilled the task at hand"
That's obviously false. Just look at all the "enhancements" and "works of genius" perpetrated by Netscape and others. For instance, the DOM. If the WWW had been designed by someone competent, something similar to the DOM would have been the starting point, not an afterthought. Then, look at all of the products created to make sense out of the web and development for it. Servlets, EJB, CGI, PHP, the list just goes on and on, and all it boils down to is sending HTML to browsers.
"... built on the already familiar SGML"
To whom, librarians?
"... wasn't encumbered by 29,837,498,734 stupid patents and trade secret laws."
Like the complaints about text vs. binary, that's a strawman. There's no law saying that a better design had to be so encumbered.
It's odd to hear the carping today about how cumbersome the web is -- all communicated over the web, of course, from hand-helds, laptops on wi-fi, old PC's in basements, cyber-cafes in remote locations of the globe ... across time, geography and international borders.

I was a user of the precursor of the internet through government labs 25 years ago and it 'met the need at hand' by allowing us to (primarily) transfer files and run programs on other computers remotely. Storage space and CPU speed were precious and expensive so distributing your work to multiple computers and making it available to both yourself and co-workers (often geographically separated) were key objectives. Tidy URL's weren't -- we wanted all the specifics so we could be sure what we were connecting to.

The gripes that it is cumbersome for today's uses are certainly valid -- today's generation should take it a step further for the new tasks at hand. I remember being totally thrilled that I could run a computer in another state, transfer files, engage in 'instant messaging,' 'finger' a user to see if they were on-line -- all from my computer in the basement of a government building. I wasn't worried that the location codes were messy -- to me, that would have been the same as complaining about the color of paint on a new car after having lived with a bicycle all my life.