A message to clueless website authors

Are you a brain-dead website author?  You probably are, if you've been “directed” to read this page.  You probably had one of those offensive and ignorant “upgrade your browser” demand notices on your website, too, when what's really needed is for the website to be upgraded.  Stop being a nuisance, and learn how to write websites properly!

For anybody else who's stumbled across this page, this isn't going to be a lesson in how to write websites; there's plenty of material on that (there are links to some near the end of this page, and a primer on website authoring elsewhere on this website—an introduction to the concepts of proper authoring).  This page has been written, over a few years, as I've encountered website after website which was badly written; with new sections added as I've discovered yet more annoying and stupid sites.  There's a bit of information repeated between sections, to reinforce the point being made, and to avoid having to scroll back and forth between sections.

Any true “webmaster” should already know about all of this (everything that's mentioned on this page).  If they don't, then they've no justifiable claim to the title (no trade, profession, discipline, etc., allows anybody to call themselves a master of something when they're just an unskilled/inexperienced amateur).  A real webmaster knows how to use the technology available to them properly (and actually uses all of it properly), understands that the data is going to be used in a myriad of ways that they can't predict, knows that they can support this by doing things properly, knows that they break all sorts of things by doing it wrong, and really understands when (and how) they can safely bend rules (though few claiming to be webmasters manage this).

Periodically, I research what locations on the WWW have referenced this page (and those that preceded it), and I've noticed a few common points that have been raised:

NB:  Where this page uses terminology like must, requires, mandatory, may, etc., the proper meaning of such words is intended (e.g. you must do something that you “must” do, it's not optional).  I don't know why people find that so hard to understand that it needs explicitly pointing out to them (it's probably the same reason why so many people break speed limits—thinking that, for some strange reason, explicitly defined rules are malleable guidelines).  Take what this page, and any authoritative reference material, says at its word—don't interpret things into other meanings.  This ignorance has even culminated in the need to specially publish a document clarifying the meaning of such words (RFC 2119), even though their meaning is not open to interpretation within the English language, and not even difficult to understand.  If anyone actually needs that document to tell them to take those words at their face value, then they really are stupid.

The key points on this page:

This website requires…

If there's one thing that I'm really sick of, it's being told that my browser isn't capable of using some website.  Especially as it usually is!  That, and being told that I must re-configure my system (usually, to degrade my security).  This sort of thing is a waste of my time, it's damn annoying, and just shows how stupid the website author is.  The internet is not for one person's self indulgence, it's for many people to use.  Why else would you publicly publish a document, if you don't want as many people as possible to be able to read it?  If you know what you're doing, then you write so that as many people as possible can read your pages, and without making them have to go out of their way to manage it.

There's a ridiculously large number of moronic website authors who build sites that will not work in more than one or two browsers, because they've used some non-standard authoring style.  If you attempt to use their site with another browser, it will fail in various stupid ways.  Guess what?  The fault is not in the browser being used, it's in the brain of the website author:  They're a bloody idiot!  They've written a site for specific browsers, and that's all it's probably going to look okay in.  When newer versions of browsers come out, including newer versions of the same browser that they've tested the site with, they'll have to rewrite their site.

And there are just as many incompetent ones who assume that their website will only work in one or two browsers, check to see whether you're running one of them, and then blatantly lock you out if you're not.  If you hack your way around their stupidity and gain access, you'll often find that the site does work quite okay, or with only minimal problems (e.g. the site is usable, even if it's a mess).

Then there's sites which just won't work in any browser that I try, including the latest ones, with all the favourite plug-ins installed, and on a ridiculously over-powered (for web browsing) computer.  They're seriously broken, and only work as expected in broken browsers (i.e. the author's broken browser), or very badly configured browsers.  I'm not about to reconfigure my system to allow your website to work (in a manner that also allows every exploitative website to infect the machine with crap), and a huge proportion of your visitors won't even know how to modify their browser to suit.  Protective software is becoming more widely used, now, even at home.  So along with sites that simply don't work, sites also get actively disabled by protective software, with the end result that your poorly created website loses its audience, all of which is your fault.

The solution is simple:  Write the website properly!  Don't misuse HTML.  Do not tell the visitor to download some update to their web browser, reconfigure their browser, or to use a completely different browser than what they want to.  Write the site to the standard specifications so that it works in all browsers that are capable of working properly (i.e. most of them), and don't do dumb things!  Don't do stupid browser version testing, just completely avoid doing silly tricks that only work in some browsers, and let the browser ignore things that it can't deal with.  Only needing one version of a website for all browsers also means less authoring work.

e.g. “Blockquote” is for quoting blocks of text, not for indenting.  There's no specification that says that user-agents “must” indent blockquoted text, so don't expect that behaviour (e.g. one of my browsers italicises the quoted text, it doesn't indent it).  The blockquote element only means that the contents are “quoted” material (how that's indicated, is not specified), “tables” are for presenting tabular data, “em” means that the text is “emphasised,” not “italicised” (italicising's just a commonly employed, but by no means mandatory, method of indicating emphasised text), and so on.  HTML elements have specific purposes, so use them appropriately.
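As a sketch (hypothetical markup, not taken from any real site), the difference looks like this:

```html
<!-- Right: blockquote marks the content as quoted material;
     how a browser indicates that is up to the browser. -->
<blockquote cite="http://www.example.com/source">
  <p>The quoted passage goes here.</p>
</blockquote>

<!-- Wrong: abusing blockquote purely to get an indent. -->
<blockquote><p>Not a quote; the author just wanted an indent.</p></blockquote>

<!-- If an indent is what you want, suggest one with styling instead. -->
<p style="margin-left: 2em;">Indented, without lying about what the content is.</p>
```

The first and third fragments can look identical in one browser and quite different in another, and that's fine: only the first one claims its contents are a quotation.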

I'm not about to spend ages downloading a different browser, install it, reboot, and then come back to look at your site; neither are many other people.  Your suggested browser may not even be available for our systems, and/or we may not be allowed to make changes to the computer we're using.  Likewise, regarding plug-ins to our current browser.  And we're certainly not about to change the device we're using to access the internet, just to use your website.  More non-computer appliances are being used for the internet, different computer systems are being more widely used, and browsers are getting less tolerant of bad authoring, so idiot website authors had better get it through their thick heads to stop kludging the content for specific browsers.

(“Kludging” the content, means writing deliberately broken content, to suit the faults of something else.  This kludged content won't suit the requirements of other things.)

We're going to ignore your website, as well we should, permanently.  That means no sales, or whatever else you're trying to use your website for (the ultimate stupidity is computer sales websites that require the latest version of IE to view them, when they're trying to sell new computers to people using older computers).  We'll be away within mere seconds, to find some other site that works, now, in the browser that we're using.  It's not just because we're outraged by a dumb site; why would we persist in trying to get some broken site to work, when it's just a moment's work to find something else that does work?

There are more web browsers than Internet Explorer or Netscape, and more operating systems than Windows and Macintosh.  And, many of those others are also quite capable of using SSL, CSS, JavaScript, Java, etc.  And, very often, are far more secure (safer to use) than IE or Netscape.

If some site excludes other browsers for being potential security risks, then they should permanently block the usage of Internet Explorer, at the very least.  It's riddled with serious security flaws, and probably always will be.  Disregarding all the programming bugs, and the opportunities it provides for hacking, its ability to keep website log-on details stored within the system is against the usual terms of service for using things like bankcards (it's the same situation as writing your PIN on the card).  Banks should not be advocating that their clients use Internet Explorer, nor advising people to degrade their security so the site will work; they should be actively advising against it, and authoring their sites better.

Relying on Java or JavaScript, is stupid.  They're not used equally across different browsers.  They're often disabled, because users are sick of the stupid uses of them, and how they make their system grind to a snail's pace, or their use has been prohibited on various computer networks.  And both have been perverted into non-standard variations by Microsoft.  Likewise, requiring the use of other plug-ins, such as Flash, Shockwave, and PDF, or non-standard and unsafe browser extensions like ActiveX, is equally stupid; and excludes your site from a large number of people.  That number could be in the millions!

Also, relying on cookies, referrer headers, and user-agent headers to control access to your site, is incredibly stupid.  Likewise, with relying on client-side scripting to validate data before submission.  They're all so easily modified.  Relying on them leaves you wide open to abuse, and prone to errors.  The referrer header's not even a mandatory header, anyway.

Other stupid tricks:

Forcing people to put up with something is being an ass

If users have decided that they don't want animated banners, Java, scrolling messages, scripts, pop-ups, advertising, graphics, or anything else, then they don't want them!  It's not up to you to try and circumvent their preferences.  They didn't want any of that stuff, including yours.  Take the hint that users don't want crap inflicted upon them.

When told to quit doing something, the message is that you should stop doing it.  When you find that people want to ignore certain things, then get used to the fact that they're not interested.  It's not some form of challenge to find yet another way to do something that people don't want.  Being involved in any sort of activity, is not about doing what you can get away with, but doing the right thing in the first place, because it's the right thing to do.  Exploiting some flaw, that lets you do something that you couldn't otherwise do, is acting in a completely reprehensible manner; “caveat emptor” is the attitude of a fraudulent con artist, and not tolerable.

Inappropriate alt text attributes are worse than useless

Not properly using alt text means that pages become nonsensical when images aren't showing, for any reason (maybe because the server isn't working right, or the page was authored badly and got the links to the images wrong, or because the user doesn't want to slow down their browsing for the images, because the user cannot see, etc.).

The alt text attribute is for conveying a suitable “alternative” for when the images aren't showing.  For instance, if you're using an image in a link to the homepage, then the alt text should make it clearly apparent that it's a link to the homepage, unless the link already includes text which makes it plainly apparent.  If you've included a photo on the page that's not just superfluous and ignorable, then its alt text should be a suitable alternative to seeing the image.  If you're using an image which is unimportant to reading the page (e.g. decorative borders around something), then set the alt attribute to nothing (e.g. alt="").

Setting it to say something inappropriate, like “image”, is a nuisance.  It conveys no meaning to what's being read, it's useless information, looks stupid, and frequently interrupts the rest of what would have been a coherently readable section.

If you want some text to appear when the user hovers a mouse over the image, then it's the “title” attribute that you should be using.  Regardless of the behaviour of some browsers, regardless of rubbish perpetuated by some people who don't know what they're talking about, that is the purpose of the “title” attribute, not the “alt” attribute.
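A sketch of the distinction (the filenames and wording here are invented for illustration):

```html
<!-- A linked logo: the alt text stands in for the image, so it names
     the link's destination; it doesn't describe the picture. -->
<a href="/"><img src="logo.png" alt="Homepage"></a>

<!-- A purely decorative image: empty alt, so non-graphical
     readers can silently skip it. -->
<img src="border.gif" alt="">

<!-- Hover text belongs in title; alt carries the alternative. -->
<img src="sales.png" alt="Sales rose steadily through the year"
     title="Chart of monthly sales figures">
```

With images off, the first example still reads as a “Homepage” link, the second vanishes harmlessly, and the third still conveys what the chart was saying.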

Huge image files on websites are not user-friendly

A serious annoyance is websites that have unnecessarily huge graphics or multimedia files.  Apart from costing more money in data traffic expenses, they're slow to use.  Slow enough that they try the patience of visitors.  Not everyone has a fast internet connection, and not all routes between services are fast, either.

Complex arrays of graphics aren't a good idea

Contrary to modern myth, carving up a picture into sections does not make it quicker to load.  It can cause problems with running more connections to a server than would otherwise be necessary.  There's a limit to how many connections a server can, or will, accept.  Both to a particular visitor, and the overall number of connections it can service to all visitors.  Likewise, browsers can only make so many simultaneous connections.  Exceeding the limit results in stalled or aborted downloads.  A visitor is likely to try reloading the page, attempting to get the rest of the graphics, only to face the same problem again.

Contrary to another myth, sliced images do not stop people from copying images from your site.  It makes it slightly harder, but it's not difficult to work around.  Neither do other tricks prevent people copying images (e.g. disabling the right-click menu only works on some browsers, and even that can be easily worked around).

Pages with large numbers of graphics are just as bad.  They take ages to load, and suffer the same problems due to the maximum number of concurrent connections.

Likewise with pages that use images to force the page to format in a certain way.  Without the images, the page looks a mess.  And sometimes with them, too, as the author is unskilled at doing it in ways compatible with all browsers and differing browser canvas sizes.  People actually do browse with images turned off; it's quicker that way, and it avoids some of the security flaws present in some browsers.  If a site looks stupid that way, it's because the author doesn't understand what HTML is about, not that the browser is displaying it in the wrong way.

Improper use of image elements makes things worse.  ALT text is (now) a mandatory field, but it has to be used intelligently.  It's to be seen when the images are not showing (ALT is short for “alternative”, as in an alternative to the image; to convey the same meaning as the image, not to “describe” it—there are other ways to do that properly).  If you want tooltips to pop up when you hover a mouse over an image, use the TITLE attribute.  Set the ALT text to show something meaningful when it's required (e.g. if you used a graphic for a link, then use words that convey the same meaning as the image), and set it to nothing when the image is best totally ignored (e.g. set ALT="" for images that just pad out the page, or are completely redundant in a text-only viewing mode).

Ill-considered flashy multimedia sites are stupid

Using multimedia content, just for the sake of it, is not a good idea at all.  It's slower for visitors to use, that's if they can even use it.

Flash is typically included in websites using malformed HTML, isn't available for all platforms, and can be a complete pain to install (a prolonged download from a slow and convoluted website, and it may also require a reboot).  It precludes text-only browsing (speedier browsing of sites, accessibility for the disabled, and search engine indexing of sites).  And sitting through a four-minute download, only to find that you've been waiting for a completely unnecessary introductory animation, leaves you with very little respect for the page author.

Background sounds and music are damn annoying.  They delay the loading of pages, are distracting, and are also typically included using malformed HTML.

Likewise, animated GIFs are a pain to the eyes, and lots of them can be a serious drag on CPU resources.

This sort of thing shows a fundamental contempt for anybody trying to read your pages.  It's the cyber equivalent of trying to read a book with screaming kids running around you in circles.

Daft colour schemes are a pain

Picking certain combinations of colours (foreground against background) can make things very hard to read.  Even simple black on white (and vice versa) is harsh on the eyes.  And if you do specify colours on a page, be sure to specify the colours for background, foreground (text), and the links (i.e. set everything, not just some things).  Otherwise, some user's default settings are likely to clash with your non-default choices.  Don't forget that many people will have customised their browsers (not just a few people, it could well be millions of people—the world is a pretty big place), so don't colour pages just for the sake of it.  Many people will already have made their browsers look better than the original default settings; and not all browsers have black text, white background, blue unvisited links, and purple visited links as their default colours, anyway.
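In CSS terms, “set everything or nothing” looks something like this (the colour values themselves are arbitrary examples):

```css
/* Specify all of these together, or none of them; a half-specified
   scheme will clash with somebody's customised defaults. */
body      { background: #fffff0; color: #202020; }
a:link    { color: #0000b0; }
a:visited { color: #600080; }
a:active  { color: #b00000; }
```

Setting, say, only the background to something dark, while leaving the text and link colours to whatever the reader's browser defaults to, is how you end up with dark-grey links on a black background.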

Other points to remember are that there's an awful lot of people with eyesight problems; colour-blindness is very prevalent (some statistics say one in ten people have it).  And, many people have badly set up monitors (using very dark colours, may make things disappear from view).

Playing with fonts can make a page hard to read

Setting a page to use particular fonts, or font sizing, can make a page difficult, or impossible to read.  Anybody who customises their browser, or has a browser which comes with sensible defaults, will have settings that make it easy to read a page.  The font size will be a normal size, that's comfortable to read; likewise with the font face.

Choosing something else, means that you're making an assumption that they have the font that you've picked (they may not, and their browser may substitute something awful; or if you've tried to highlight sections in a different font, that highlighting will fail), and that the size is suitable for them (you may have picked a tiny or huge font, particularly when you consider that their screen resolution may be different from your own).

The first common mistake is to think that you need to specify a font.  You don't.  At all!

The next one is that you have a better idea of what looks good, than they do.  You're wrong, and you have no way of telling what's actually “suitable” (for them).

Leading to the next common mistake, that you think that most people have their fonts set up too big, and that you should “correct” that for them, by making your fonts one size (too) small (for many people, that means a font that's significantly too small to be easily read).  You're wrong; my fonts are set up exactly how I want them, and many people's are set up exactly how they “need” them.  Now, people have to re-adjust their browser, to compensate for your page.  Then, the next time that they encounter a website that's written properly, it looks too big, so they have to re-adjust their browser, yet again.  This mess is all your fault.

(On nearly every browser that I've looked at, the default font size has been appropriate.  The user hasn't needed to fiddle with settings, until some author has used awkward font sizes.)

Another common mistake is to specify “fixed” font sizes, which not only means that you've probably picked an inappropriate size for them, but you've also made it just about impossible for them to adjust the display of the page to suit themselves.  Attempting to fix the size of fonts, to fit text into certain parts of the page (e.g. because you've got images of a certain size, surrounding it), is a mistake.  The entire idea of that (fixed page layout), is flawed.  It's the complete opposite of the design philosophy of HTML.
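A sketch of the difference, in CSS (the particular percentages are only illustrative):

```css
/* Leave the main body text at the reader's own chosen size,
   and scale everything else relative to it. */
body  { font-size: 100%; }   /* i.e. whatever the user's default is */
h1    { font-size: 150%; }
small { font-size: 85%; }

/* Not this: a fixed pixel size overrides the reader's settings, and
   comes out at wildly different physical sizes on different screens. */
/* body { font-size: 10px; } */
```

The relative version degrades gracefully: a reader with a high-resolution screen, or poor eyesight, keeps their own comfortable base size, and your headings stay proportionally larger.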

(Yes, this page does vaguely specify font types, but only in as much as “suggesting” a sans-serif font face to be used in headings.  Your browser, if it's a decent one, will allow you to pick your default serif and sans-serif fonts, amongst other default display choices.  Your settings determine what size the main body text is, with the other text on the page being sized proportionally around those settings.  As usual, Microsoft's Internet Explorer does an incredibly crap job of handling this, and often fouls up.  This page's style sheet is fine, Internet Explorer is in error, and I'm not going to kludge my site to suit it.  You can use a better browser, though; and have a safer internet experience, too.)

Quite apart from fixed font sizes being antisocial (because it's hard for the reader to adjust them back to the size they need), the implementation of absolute font sizing is fundamentally flawed in different browsers:  Their scaling of “points” to a consistent size doesn't work, and the concept of “pixel”-sized fonts being related to a variable factor, rather than an exact use of the dots on the screen, just isn't understood.  And one fixed-size font is not the same size on different screens, because of different display sizes and resolutions.

On my screen, as I type, I have the browser font size set at 24 pixels to get normal sized text.  Other computer users might set their font size at 14 pixels to get the same size writing, simply because their hardware is different.  And using points instead of pixels is fairly similar in behaviour (highly dependent on the software and devices being used, and widely variable between different devices).

You should be able to see that there's a large discrepancy, and when clueless webpage authors do things like set their page font size to 10 pixels, or 6 points, they make the text unreadable.  Authors should adjust their browser, not the page.

Related to font sizing is line height, and that brings in a similar array of problems.  Even worse is when users encounter a page with tiny fonts and tiny line height—while they may be able to set a minimum font size on their browser, to overcome web design stupidity, bumping the text up to readable dimensions, it's still going to be jammed into a tiny line height, leading to overlapping text.  Web browser programmers haven't given us a way to overcome this page design stupidity, yet.

Once again, this fault is due to web authors with no clue.  They don't know about other users' reading environments, and just don't understand the concept that everybody has different equipment.  Authors who can't grasp this should be beaten about the body with a two-inch thick Unix manual until they can't use the keyboard any more.

Pretty useless websites

There's a great many “pretty” useless websites.  Prettying up a website doesn't make it better, and often makes it worse.  Apart from people having different ideas about what looks nice, many people have no idea about how difficult it is to read some web pages (tiny and indistinct fonts, hard to read colour combinations, convoluted and scrambled layouts, pages that don't fit the browser window, etc.).

This page conveys a message, and it does that with words; anything else is superfluous.  Making things look prettier, may be nice (perhaps), but isn't as important as the content.  And it sure isn't going to convince some ignorant idiot to change their ways.  They're not going to get it right until they learn about what they're supposed to be doing—most won't.

Abusing HTML demonstrates that you don't know what you're doing

HTML is a mark-up language, for “marking up” sections of its content as “meaning” something.  Incorrectly using HTML elements makes a nonsense of the marked-up data, making machine assessment of the code next to impossible.

(e.g. With appropriate software, I can search through a page for definitions of something; but if the author's used the element just to make something look different, rather than using it to “define” something, then they've just broken the functionality of my software.  Likewise, I can see a summary of a page, and jump to key points on the page, by assessing the headings used on it; though only if heading elements have been properly used.  But if a page is written badly, all I can do is plain-text word searches.)

HTML should only be used to mark up the content, to indicate what it is.  If you want to change how something looks, then you should be playing with a “styling” language (e.g. CSS); and also realising that styling is merely a “suggestion” (it may be completely ignored by the browser), so don't rely on styling to make your page coherent.  Learn what the various WWW languages are really for, and use them properly.
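For instance (a hypothetical fragment; the colours and fonts are arbitrary), mark up what the content is, and suggest its appearance separately:

```html
<!-- The markup says only what things are... -->
<h2>Results</h2>
<p>The first trial <em>failed</em>.</p>

<!-- ...while a style sheet suggests how they might look.  Note that
     emphasis needn't mean italics; here it's suggested as highlighting. -->
<style type="text/css">
  h2 { font-family: sans-serif; }
  em { font-style: normal; background: #ffffc0; }
</style>
```

A browser that ignores the style sheet still presents a correct heading and correctly emphasised text, in whatever manner it normally does; nothing about the page's meaning depends on the styling arriving.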

Quite apart from it being the right way to do it, it's also the reliable way to do it.  Many of the kludges that ignorant designers use, instead of doing the work properly, don't actually work as intended, and cause all sorts of nasty side-effects (not to mention producing five pages worth of convoluted code, full of errors, instead of it only being half a page's worth of working HTML).  Then they put kludge on top of kludge, with different kludges for different browsers (which you can't reliably determine), instead of doing the job right in the first place.

HTML is not a page layout language!

HTML is a document “structuring” language.  It's designed to identify sections of the page as having specific meanings (as being paragraphs, lists, quoted text, etc.), which can be machine assessed, by search engines, web browsers, and other agents; not to specify that certain sections should have a specific “look.”  The fact that certain HTML elements have acquired customary ways to be rendered, is incidental.  They're “customary,” in that you can recognise things for what they are, because they're displayed similarly on different browsers (headings, lists, etc.), but they aren't displayed identically on each browser.  Very few of the HTML elements actually have a (standard) pre-defined way to be shown, therefore you can't rely on all browsers displaying elements in the same ways as each other (they don't, they're not supposed to, and future browsers may behave differently).

The proper use of HTML allows quite complex machine processing of the contents (e.g. search engines, and various browsing aids), that produce correct results (e.g. displaying a table of contents for a page, reading out structured information in a coherent manner, etc.).  Improper usage produces incorrect results, and only caters for very basic machine processing of the data (e.g. searching for words that match your query, treating the document as if it were merely plain text).

Webpages are generally seen on a display screen, with no standard size (neither the screen, nor the browser window), so many of the techniques suitable for printed media, are inappropriate.  Many are contrary to the design of HTML (of resizeable, reflowable, page layouts), and are often unachievable (e.g. “fixed” layouts), producing broken results.

e.g. I can't print pages from one of my ISP's (Optus) user account logs, on normal A4 paper, without a lot of mucking around.  They've used a layout that makes the right-hand side of the page fall off the paper.  If they'd done it properly, I'd be able to print them without any problems.  There's no “good” reason to deliberately restrict my ability to print those pages (it's information for me, about my account; and I'm the only one who can access that page).

However, if you wish to try and make a page look a particular way, then mark up the content in the correct manner, and use styling (e.g. CSS) to try and achieve the effect that you desire.  Understand that it's only a “suggestion,” it may be ignored or overridden, and that's how it's supposed to be.  It is not your place to try and force a layout on someone, regardless of how they wish, or need, to see the page; and generally, fixed layout attempts fail, because they don't work in all the viewing conditions that a page is seen in.

The best way to support older browsers, is not to use bad HTML authoring techniques, in an attempt to misuse HTML as a page layout language; but to use CSS which can be totally ignored, producing nicer pages in the supporting browsers, and plainer pages in the other ones.  Get used to the fact that your pages will be seen in the manner that best suits the viewer, not the author.

CSS is not a new way to do stupid things

One of the first mistakes that authors make, when converting over to using CSS, is to carry on creating pages in the same hideous, user-un-friendly manner.  For instance, CSS is not a new way to do the things that you shouldn't have done with tables.

(Tables have always been a bad way to format a page: not just because doing so abuses the HTML table element, but because they're too inflexible for presenting information that will be seen in browsers with widely varying page widths, and it's a poor way to present information which should simply be displayed in a normal manner.)

CSS is for adding extra, optional, styling “suggestions,” to a page.  You cannot “force” anything upon the user (you can't really do that with HTML, either; apart from forcing a mess upon them).  CSS is designed to be optional, to not make a page useless when it's not supported (stupid authoring of pages notwithstanding), and to offer better ways of doing fancy things.

If you have a browser capable of selecting which style sheet a page will be rendered using, such as recent versions of Mozilla, Opera, Netscape, and several others, you'll see that this page has several styles, other than the default.  Several of them are there to demonstrate stupid authoring techniques.  Some will cause the page to become unreadable.

If your browser doesn't support CSS at all, then you'll see this page, and many other well authored non-CSS reliant pages, in a plain fashion, which is still easily readable.  Unlike what happens when you encounter pages stupidly authored abusing various HTML elements (like the table element), where you can do almost nothing to un-mangle the contents; and pages which have relied on other special tricks, like JavaScript, which fails to do anything unless supported, and allowed, by the browser.

Sites designed for specific resolutions show a complete failure to understand HTML viewing

HTML is designed to deliberately render in a non-specific size, to fit into whatever sized viewing window the browser has.  Different people have different screen resolutions, and browser window sizes.  You cannot make any assumption about the size of the browser's display (e.g. most people have this, so I'll make a page that uses that space); that's completely incompetent website authoring.  We aren't going to change screen resolutions to suit you, and we are going to use browser window sizes that suit us.  That's a specific design criterion of HTML:  to give the viewer total control over the display of the page.

One of the purposes of HTML is to get away from the problems associated with trying to read documents that were formatted on someone else's (differently set up) computer.  Trying to read documents which require horizontal and vertical scrolling, because they're too wide, or have used multi-columnar layouts (which are inappropriate for anything other than printed material), is damn annoying.  Likewise with documents which have become badly wrapped, due to over-wide lines, with no neat way of rewrapping them; or which require an excessively wide screen to view the page.  And text that only occupies a thin section of the screen, for no good reason, is just as annoying to read.  Not to mention documents which use fonts that are far too small to be read easily, or stupidly huge ones.

Writing HTML documents so that the contents can squeeze and stretch to the current width of the browser window (as they're supposed to) is the proper way to author them.  It fits the needs of the readers, formatting the text in the manner most suitable for them; and it allows the document to fit within the technical limitations of their browsers.

PDF files are a curse

Making people unnecessarily read PDF files is a major pain.  They're slow to load, require a large and cumbersome program to read them, and they don't fit into the model of making pages that are most suitable for the needs of the person reading them.  What you typically get are documents with fuzzy text and images, documents that aren't easily navigable, page sizes that don't fit the screen, and documents that are often partially unreadable with the user's current version of their PDF reader (mangled text, missing images, etc.).

Their only good use is for printing documents, in an attempt to make them print the same for everybody.  Even that's not reliable, and it's still against the principle of providing information for people to use in the manner that they want.  They may not want to print a file at all; they may not want to print a two-hundred page document (and it's quite hard just printing specific pages, as the page numbers written on the page frequently don't match the real page numbers).

PDF files have become a security risk (ways have been found to use them to compromise people's computers), and there's no guarantee that the reader is going to see an original (non-modified) document (few know how to check the veracity of a document, nor are many aware that they should even have to do so, and it's quite easy to make a counterfeit that looks so good that it wouldn't occur to readers that they might need to check its authenticity).

There's very little reason to use them; most of the justifications are based on falsities.  They rarely provide anything better than HTML, and frequently bring about a whole mess of problems.

Sites designed for specific browsers show a fundamental misunderstanding about the WWW

Your browser isn't the same as mine (neither in which browsers they are, nor how they're configured), nor do they have to be the same browser.  Using features that only work in specific browsers is ignorant of the diversity of devices that can access the WWW, restrictive towards people who can't use the device that you're insisting on, and limits your potential audience.  Do you really need that special feature?  Or did you just get hooked on playing with fancy toys?  The chances are that you don't need to do whatever it is that you're attempting.

Sites that try to determine which user-agent is accessing them, to send a different version or a lock-out notice, often get it wrong; and they make a worse job of it than a single version that works in all browsers would.  So coming up with multi-version sites isn't the answer; it involves more hard work, which is all-too-typically a waste of effort.

Not properly encoding ampersands breaks pages

Not encoding ampersands means that things will break in some situations.  You can get characters appearing or disappearing in pages, and links that don't work properly.  Hoping that a browser will fix this kind of fault for you is a serious mistake (it's wrong, browsers don't do it very well, and they don't all handle it the same way—what seems to work for you may not for other people).  If you're dynamically generating anything (links, page content, etc.) from a program that might use an ampersand, then you need to write that program so that it properly encodes ampersands.  Properly encoded, they work in all situations.

In general, ampersands need encoding when written in HTML files, as they form the opening symbol for a character entity reference, or a numerical character reference, and may be mistaken for the start of a reference if not encoded.  There are rules about when they do and don't need encoding, but not all browsers get them right (this is a proven fact, not just a theory), and it's easier to just encode them all.  Although the rules state that they don't need encoding if the following characters cannot form part of a character reference (simplifying the explanation, somewhat), you might be using characters (after the ampersand) that can form part of a character reference without realising it (it's far easier to encode the ampersand than to check whether you've used a sequence of characters that are already used in the rather extensive list of character entity references).  This is a problem that needs consideration when URIs are dynamically created as a result of someone interacting with a website, especially if they're able to type something as a query into a website.

While ampersands can be used in URIs, in the right places (it's an HTML issue about encoding them when they're written within HTML pages), there's another consideration about ampersands in URIs:  They have their own special meaning within URIs, as separators between different parameters.  If you want to use an ampersand as a character, itself, in an URI, you must encode it.  If you didn't encode it, the URIs would break at the point the ampersand was, causing the latter portion to be lost.

Ampersands in the body text, and when used as separators between parameters in URIs written in HTML documents, will need encoding in most circumstances, as either the “&amp;” character entity, or the “&#38;” numerical character reference (character number thirty-eight, decimal, in the HTML document character set).  They're both the same thing; it's your choice as to which to use.  But when you require an ampersand character to be used merely as an ampersand within the text of an URI, it must be encoded as “%26” (hexadecimal number two-six [representing thirty-eight] in the ASCII character set).

Between URI parameters: <a href="http://example.com/search.cgi?q=tapes&amp;beta">search</a>
As part of an URI: http://example.com/CaringForCats%26Dogs
Text in the page body: <p>Machine accepts dollar&#38;cent coins.</p>

If the first example wasn't encoded, the “&beta” portion of the URI (“tapes&beta”) could be treated as the Greek letter “β” (beta), rather than the word “beta”.  Similarly, if a search query URI written into an HTML page includes “&lang=en” (for language is English), Lynx (correctly) treats it as left angle-bracket (“&lang;”) followed by “=en” (which, obviously, results in an error).

If the second example wasn't encoded, it could be treated as a request for “http://example.com/CaringForCats”, which wouldn't exist, and it'd also be trying to supply “Dogs” as a parameter to it, as if it were sending data to a script (“/CaringForCats&Dogs”).

If the third example wasn't encoded, the “&cent” portion of the text could get treated as the “¢” symbol (e.g. you'd see “dollar¢ coins”).

All of these examples are within the realm of possibility, and what actually happens depends on how well the browsers follow the rules, or try to correct errors (successfully, unsuccessfully, or by trying to correct something that should never have been interfered with).  Though the last example wouldn't need encoding—as it really should have been typed with blank spaces either side of the ampersand (else it's very poor typing), and a blank space either side of an ampersand satisfies the rules for not needing encoding—it's a demonstration of the technique for encoding the ampersand, and an example of where someone might type something so badly that it requires encoding.
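The two kinds of encoding above can be sketched in JavaScript, which is one place such URIs get built dynamically.  The standard encodeURIComponent function handles the “%26” case; the entity-encoding step here is a simplified sketch that only handles ampersands (the addresses are the made-up examples from above):

```javascript
// Percent-encode an ampersand that is part of the data itself, so the
// URI doesn't split at it ("%26" is hexadecimal 26, i.e. decimal 38, "&").
const path = encodeURIComponent("CaringForCats&Dogs");
// path is "CaringForCats%26Dogs"

// When writing a URI with several parameters into an HTML page, the
// separating ampersands must instead become "&amp;" in the markup.
// (A simplified sketch: real HTML-escaping must also handle "<", ">", etc.)
const uri = "http://example.com/search.cgi?q=tapes&beta=1";
const htmlSafe = uri.replace(/&/g, "&amp;");
// htmlSafe is "http://example.com/search.cgi?q=tapes&amp;beta=1"
```

Note that the two encodings are applied at different layers: %26 belongs to the URI itself, while &amp; only exists in the HTML document the URI is written into.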

Not correctly specifying the character set causes reading problems

Different people use different character sets on their computers, perhaps because they're using a different type of computer than you're using, perhaps because their language has different requirements than yours, etc.  HTML accommodates this by providing a mechanism to specify what's used, and browsers can translate documents as required.  But they can only do this correctly if the character set is properly identified, and there's only one way to do this without possibility of error—the browser must be told what character encoding is being used.  The author can explicitly specify what they used, and configure the server to provide that information with the webpages; or they can author their documents to suit the existing configuration of their webserver.

The server should inform the browser, via HTTP headers, of the encoding being used.  Using meta statements in the HTML has become a fall-back way of providing this information, but it cannot override the information provided by the server's HTTP headers (they're the authoritative source of information).
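As a sketch, the two mechanisms look like this (UTF-8 here is just an example; declare whatever encoding the document actually uses):

```html
<!-- Authoritative: sent by the server as an HTTP header,
     not written anywhere in the page itself:

         Content-Type: text/html; charset=UTF-8
-->

<!-- Fall-back only: a meta statement in the document head, which
     cannot override what the server's HTTP headers say. -->
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
```

If the two disagree, browsers are supposed to believe the HTTP header, which is why fixing only the meta statement often changes nothing.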

It's actually mandatory to specify the character set being used for HTML documents, and just as important to do so with plain text documents.  Not specifying it means that browsers have to guess (either the software guessing by itself, or the people using the browser manually reconfiguring it to try and read your page), and there's no way to be absolutely correct at playing that guessing game.

Specifying the wrong one causes even more problems.  Without the correct information the browser may display some symbols incorrectly (typically, broken punctuation), and it can also cause the entire page to be rendered as gibberish.  So despite it being required information, it's still better to completely omit it than to provide the wrong data (i.e. for the few cases where it's impossible to predetermine this information).  This should only be done in the few cases where it's really necessary, not server-wide because of webmaster laziness—that just shifts the problem on to other people, and their browsers; it doesn't solve it.  The webmaster really should make an effort to either determine the right charset, or re-encode the document into one that they can specify.

NB:  Although the charset can influence the fonts being used, they're two entirely separate things.  Fooling around with one of them, trying to fix a fault with the other, is wrong.  It's just going to cause more problems, no matter if it seems to do the trick with your browser.  Set the charset correctly, then play silly games with fonts, if you absolutely must do that (and you probably don't, and really shouldn't).

Debugging websites with a browser is useless

Using your browser to test whether a site works is utterly useless (browsers are generally designed to work their way around faults, rather than indicate that the page is malformed; and this sort of testing only tests that the site works in your browser).  Likewise, telling me that your site works for you is useless, when I've told you that it doesn't work for me; I'm not at your computer, I need it to work for me, on my browser.

Use proper debugging tools (error checkers, and validators), to get sites working in all browsers, and write HTML to the ratified specifications, in the first place.

Relying on support for the latest features is shooting yourself in the foot

Not everybody will have the latest version of browsers (or other software), nor wish to (or be able to) keep up with installing them.  If you use features that only work on the latest software, then you're limiting your audience.

Not only do you have users' updating habits to contend with, but software authoring as well.  The development of technology is usually ahead of its implementation and use.

Relying on special features (recent, or not) isn't sensible

Quite often the special feature isn't needed; and the tests authors try, to see if a special feature is there, are flawed.  For instance:

I commonly see JavaScript being used to test if cookies are being accepted; which fails if JavaScript isn't enabled, even if cookies are.  And those cookies often weren't really needed, nor was JavaScript needed for anything else but that test, either.  (If you really need to test for cookies, then get the server to send them, using server-side functions; and let it react to the response, or lack of response.)

Another common flaw is using JavaScript to check the contents of a form, and no checking of the data received at the server.  This leaves you wide open to abuse by hackers, or accidents from people who don't have JavaScript enabled (without it, no tests will be run to check that they're sending you the right data).
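Checking the submitted data at the server can be as simple as the following sketch (the field names and rules are hypothetical, invented for illustration; the point is that these checks run on the server, whether or not the browser ran any script at all):

```javascript
// Validate submitted form data on the server side.  Never trust that the
// browser's JavaScript checks were run (or even that a browser sent this).
function validateOrder(fields) {
  const errors = [];

  // A deliberately loose e-mail sanity check: something@something.
  if (!fields.email || !/^[^@\s]+@[^@\s]+$/.test(fields.email)) {
    errors.push("a valid e-mail address is required");
  }

  // The quantity must be a whole number, one or more.
  const qty = Number(fields.quantity);
  if (!Number.isInteger(qty) || qty < 1) {
    errors.push("quantity must be a whole number, one or more");
  }

  return errors;  // an empty array means the data passed
}
```

Any client-side script doing the same checks is then merely a convenience for the user (faster feedback), not the actual defence.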

Your server is the only thing whose capabilities you will know; always use it for any special features that you need.

Many sites use JavaScript for links between pages, when it's totally unnecessary (the links could have been done with plain HTML).  The site becomes unnavigable in any web browser which doesn't support JavaScript, or has JavaScript turned off, or where the page author used some non-standard method in their scripting.

JavaScript links can also make it hard to open a page in a new browser window, and prevent search engines from indexing pages (which isn't a good thing; remember that most people find WWW resources through search engines, and rarely through the front page of a website).

In most cases, a site doesn't really need any scripting running in the browser to do some task; but if it does use it, it should be done properly:  In a manner where the scripting provides an optional improvement over what happens when scripting is ignored, and where the page doesn't become useless without the scripting.

e.g. For the href attribute, write a normal HTML link to the resource (or to a useful alternative), so that the page is still usable without scripting in the browser; and place your scripting in the extra attributes provided for those sorts of things (such as the onclick and onkeypress attributes).
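A minimal sketch of that technique (the script function and addresses are hypothetical):

```html
<!-- The plain href works in every browser, scripting or not.  Where
     scripting is available, the onclick handler can enhance the link;
     returning false stops the browser also following the href. -->
<a href="/gallery/photo1.html"
   onclick="return showInPopup('/gallery/photo1.html');">First photo</a>
```

Without scripting, the link simply loads the page; with it, the optional fancier behaviour takes over.  Either way, nobody is locked out.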

Convoluted scripting isn't smart

Firstly, many people disable it in their browsers, or have it disabled for them as a company policy.  Secondly, even on reasonably fast computers, it can make them grind down to a crawl.  And thirdly, many script authors aren't too competent; they write scripts that are just plain broken, that don't take into account the behaviour of web browsers other than the one the author tried them with, or that are easily exploitable by hackers.

Overly complex sites are damn annoying

Trying to find your way around a convoluted site is not only annoying, but a serious time waster.  We shouldn't have to click through page after page to see something; nor “drill down” through menu page after menu page, to what we needed; nor read information that's on a single topic but has been carved up into a few paragraphs per page (such stupid examples also tend to be on excruciatingly slow servers, too); nor read the site's “help” or “sitemap” page, to find the page that we need; nor deal with sites which pop up other windows, causing us to have to shuffle them all around (sites that rely on a multi-window approach fail when the browser cannot, or will not, open other windows).

e.g. Taking twenty minutes to get through to your bank account transaction listing, making you go through the log-on page, then two or three menu pages which cannot be book-marked, and use slow-to-get navigational icons, is a ridiculous process to have to go through.  Unfortunately, that's a real example; my bank's site was like that.

Particularly annoying are sites using SSL (https:// links), which are slow enough to begin with, but become tediously slow once you've got pages with half a dozen graphics being encrypted and decrypted.  Such sites also tend to never bother to put in useful ALT text, so that you can't (simply) ignore graphics, and browse without having to wait for them.  Sites that use SSL encryption should be written in a lean manner (simple, small pages; avoiding the use of graphics, or at least not depending on them; and avoiding having to wade through page after page), and shouldn't use SSL until it's needed (e.g. on-line shopping sites should keep the main section of the site, where you browse and pick items, as normal HTTP linked pages, then proceed to the SSL section when it comes to the bill).

Dynamically generated sites tend to be some of the worst sites that I have to use.  They're often much slower at serving pages (and tend to have uncacheable resources, so that I can't go “back” a page, I have to re-get the entire thing), and have far more errors; the author has more authoring work to do than with flat HTML sites, and more opportunity to miswrite something; or has (again) relied on special browser features; and much of the automatically generated content is badly generated, and not fixable by the website author.

Convoluted layouts are hard to read

Webpages are not magazine pages; most people cannot see the entire page in one go, and have to scroll.  Trying to read a page which has formatted the contents into columns (with bits of information here, there, and everywhere), is annoying to read.  Having to repeatedly scroll up and down, and read little snippets spread throughout margins, around a longer article in another column, is a major pain.  Trying to read an article with other things splattered through it, is annoying.  And trying to read all of the other things that are placed higgledy piggledy around the page, is an eye strain.

Multiple windows are a thorough pest

Sites which pop up other windows are painful.  They make older computers grind to a crawl as they open the other window.  The pop-ups get in the way of what we're trying to read.  People have to shuffle windows around to see what's in the other window, or madly close windows that they didn't want to see.  Sites that pop up another window for navigation cannot be used when the browser doesn't open another window (because it can't, or the user doesn't want it to).

Frames can be a complete nuisance

Website authors frequently make websites awkward to use, by ill-thought-out usages of frames.

Redirections are problematic

Sometimes documents move locations, and webserver administrators will set up redirectors in the old location, to forward visitors to the new location (sometimes they don't bother, and links to resources end up leading to nowhere).  Unfortunately some people shift their resources around a lot, and when you try to visit their page, you end up going through several redirections, one after another.  This shows a lack of planning, or a daft, committee-like, behaviour of continually re-organising themselves, without actually doing anything new.

This is disconcerting in itself, the first time around; as you're not sure whether you're really supposed to be where you ended up, because it wasn't the address that you tried to access.  Some browsers warn you that you're being redirected, and perhaps the site's being hijacked, and you should be cautious about trusting where you finally end up (of course, there's no way for you to determine the truth of that).  Some browsers won't obey redirections at all; because sites have been hijacked this way before, or because some malicious sites deliberately bounce you about while you're browsing through them, it's become an exploit to be cautious about, and some people re-configure their browsers not to allow redirects.

But it's really annoying when you get bounced in and out of secure and insecure addresses, or in and out of trusted and untrusted addresses (as per your personal browser preferences), and you have to okay each of the transfers manually (while still wondering if you should okay them, or forget about it).  The Hotmail and other MSN sites are particularly woeful in this regard.

Some redirections won't work very well, either, because the author has done them as meta refresh statements in the HTML head, rather than as HTTP headers.  More of the cautious browsers will ignore the meta refresh statements than will ignore the HTTP headers, though some will ignore both; and caches will generally only pay attention to the HTTP headers.

It's not a good idea to “rely” on a redirect working, and if you must use them, you really should ensure that there's a fall-back mechanism in place; such as the old page stating that the location has changed, and offering a link for people to follow to the new location (for all those browsers which won't automatically follow the redirect).  And if a page has moved from one location to another, several times, it's much better to update the redirections on all of the old locations, so that they point directly to the new one; not to bounce browsers through all the old ones, in turn, to the new one.
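A sketch of such a fall-back page, left at the old location (the addresses are made up):  Browsers that obey the server's 301 redirect never see it, while everything else still gets a usable link.

```html
<!-- The body of the page at the OLD address.  The server should also
     send a "301 Moved Permanently" response pointing at the new one. -->
<p>This page has moved to
<a href="http://example.com/new-location.html">http://example.com/new-location.html</a>;
please update your bookmarks and links.</p>
```

The visible link costs nothing, and it's the only part guaranteed to work in every browser, cache, and indexing robot.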

Ironically, this page has gone through a few changes of location before I set myself up with my own domain name and a proper webserving host.  Now I should be able to maintain a permanent address for it, even if I feel the need to switch hosts.  And, eventually, things pointing to the old address, like search engine listings, should get updated to the new address (so most visitors will come straight to the right address, instead of being redirected).  Though some things don't obey HTTP 301 responses, and don't update their links; they'll still have the old address listed (the one with the 301 redirection instruction), which'll eventually cease operating, and no longer redirect people (this is yet another reason why redirects are a problem—because of machines not obeying the instruction, and manually indexed links that aren't maintained).

Crippling standard browser features is damn annoying

Disabling right-click options does not stop people from pinching things from your site (it's already downloaded and cached; and just about anything that you can see, you can copy), but does make things damn hard for people who need to cut and paste things from the page (e.g. your e-mail address into their mail client; unknown words into dictionaries; cinema session times into messaging clients, to ask if their friend can make it as well), or use other functions in their right-click menu (navigation, accessibility aids, security checks against your site, opening links in another browser window, bookmarking, changing encodings to suit their system, refreshing frames from broken servers, and so on).

Removing the “chrome” from the browser (parts of the GUI, such as the navigation buttons, the address gadget, scroll bars, etc.), seriously annoys many people, makes it hard to use the browser, and removes options which people use to check whether they want to trust your page.

Deliberately breaking the browser navigation buttons (the back and forward buttons), or doing so through ill-conceived authoring techniques, is a major pain.  Forcing people to have to click on links on your page makes things hard for people if they can't find what they want, and the situation's made even worse if you don't provide links on the page for them to get back to where they want to go.  And snaring people so that they cannot back out of your page is an atrocious way to treat your visitors.  Trying to force people to only use your site in a particular way is a hostile act.

Making links hard to find doesn't help people use your website.  Removing underlines (because you don't think they look nice) means that links don't have the default look that people expect.  Now they've got to hunt through the page to find where the links are (assuming that they'll bother looking to see if there are any links, given that there isn't the usual hint to show that there are any links on the page).  And relying on links being rendered in a different colour doesn't help people who don't have coloured displays, or who are colour-blind (a large number of people are).  Another related problem, setting attributes so that visited links don't change colours, makes it confusing for people trying to work out whether they've already followed a link.  Using images for links, but without making it plainly obvious that the image is a link, or what the link is for, also isn't helpful.

Hiding the destinations of links is damn annoying

For some stupid reason, some authors think it's a good idea to author a page in such a way as to make it hard to work out where a link is going to take you (e.g. by using JavaScript to nobble the status display of your browser, or writing the address in some obscure manner).  This is as irritating as hell; I want to know where I might be going before I decide to follow a link.  It's also stupid to think that you can do it, as people can undo such subterfuge.

Hiding the current location is being a pain

Some authors stupidly decide that they want to hide the current page's location, by loading it into a frame.  This is fruitless, in that it can be easily overcome by reloading the frame into the full window; it's annoying, in that it makes it hard to keep track of where you currently are, and to bookmark pages; and it breaks the standard behaviour of my browser (to display the current location).

Hiding your HTML source is impossible, and stupid

Attempting to hide your page source does not protect it from being copied; that's utterly impossible (if I can see your page, I can copy the source, no matter what you try).  However, it does make it difficult for people to deal with broken pages, as there's no easy way for them to find out what's wrong, and do something about it (e.g. you typed a link address incorrectly, which they could have hand-typed into their browser's address gadget).  It also compromises your site's ability to be indexed by a search engine (if you use some obscure method which requires a browser to decode it, like JavaScripting).

If you don't want people copying what you've done, then don't publish it.  It's as simple as that, and it's the only solution.  All assertions to the contrary, are absolutely false.

Degrading a user's security is a menace

Requiring people to degrade their security is a menace, and it shows a lack of skill in web authoring if you can't achieve what you need without compromising someone else's computer.  It exposes users to all sorts of hazards, ones which many people will not be able to repair by themselves, and may cause them to be a problem to other people (propagating viruses, etc.).  Even more so if you're advising people to permanently degrade their default settings, rather than just make exceptions for your site.  You're also excluding a lot of people who won't, or can't, modify their browser settings.

Anyone insisting that users degrade their security settings causes them to be (rightly) concerned about whether they're just incompetent, or have ulterior motives.

Requiring people to accept cookies is bad enough, without ramming masses of them down their throat.  You do not “need” to send a cookie with every image on the page, nor should you be doing that.

Cookies should be used sparingly, if at all.  I'm sick of encountering pages where I have to click on thirty-odd cookie prompts, and so are many others.  No, we're not going to blindly accept all cookies; there's far too many malicious uses of them to promote that behaviour.  If anything, we're going to block annoying cookies.

There are better ways to do things than rely on the user to store and return data.  It's often used as a breach of our privacy, and relying on the integrity of the data is leaving the server wide open to errors and abuse (editing the contents of cookies is easy to do; users will, and should, frequently remove clutter from their drives; and it can lead to private data, about the user, being stored on a computer that's not theirs).

Services which send cookies, and have to wait for responses, particularly lots of them, are also damn slow, making some services incredibly tedious to use.

Invading people's privacy isn't acceptable

In most cases, you don't need to know personal details of visitors; and keeping such records puts a burden of ensuring that they remain private on you, too.  Secret tracking and spying techniques are deceitful, and even doing it openly is despicable.  It's also somewhat naive to think that people won't provide false information.  The more intrusive you are, the less people are inclined to trust you, and care about doing what you want.

If you have no imagination, click here

Writing “click here” for all your links looks very amateurish, looks damn stupid, and ignores the fact that not everybody “clicks” on a link with a mouse.  It's also ignorant of how search engines index pages (they use the words displayed in the link).  And printed copies of such pages look stupid, too, missing what could be vital information for understanding the document (that's your fault for bad authoring, not the user's fault for printing the page).

Make the link part of the sentence.  If you have a file for downloading, and you write about being able to download the file from here, make the phrase, or the actual filename, the link.  If you're linking to a page with more information on a particular word (or phrase), make that word (or phrase) the link, writing the sentence in a normal manner, so that it reads coherently.

e.g. Download the current version of the example program, from our website.  The documentation for the example program is also available, separately.

Additionally, make the text of the link a coherent set of words in its own right (despite any other words around it in a sentence).  The words used in the link will be used by search engines to index the resource.  i.e. Where I've used the words “the example program”, you'd use words that unmistakeably title whatever the link points to.  Simply using “download current version” won't help anybody using a search engine (or bookmark) to find what they're looking for, whereas explicitly putting the program's name into the link will.

Pandering to the computer illiterate is dumbing down the internet

Whilst there's a lot to be said for not making things complicated, over-simplifying things, and doing things that don't need doing, isn't good either.  For instance, you don't need to put “print this page” nor “return to the …” links on webpages.  People then stupidly expect to find this cruft added to all pages, instead of learning how to use their browser.  And the more cruft added, the less space is left over to fit in the real content of the page.

I've yet to come across any modern browser that doesn't provide a way to print a page from the browser's own buttons and menus, and those that don't make one readily apparent are the sort of browsers used more by true computer geeks, who'll know how to find the function by themselves.  And skillful use of CSS allows you to make pages that print differently than screen rendering, without requiring links to separate pages (i.e. so-called, additional, “printer friendly” versions of pages).  For example, printing a cinema's session time page could neatly print the session times, and not print all the navigational buttons that are not only useless on paper, but waste space so that what you want to read is squashed into a hard to read manner, or needlessly takes more pages to print.
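A minimal sketch of that CSS technique (the class name is hypothetical):  The same page prints without its navigation, with no separate “printer friendly” version needed.

```html
<style type="text/css">
  /* On paper, drop the navigational cruft and let the text use the
     full page width; on screen, everything displays as normal. */
  @media print {
    .navigation { display: none; }
    body        { margin: 1cm; }
  }
</style>
```

One document serves both purposes; the reader's own browser, not an extra link, decides which presentation applies.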

Likewise, I've yet to come across a modern browser that doesn't let a person readily whiz up and down the page with the keyboard cursor placement keys (home, end, page up, page down, etc.).  Never mind that “return to the top” is just plain wrong if I'd never started at the top of the page; and the same for going “back” to the homepage, when I'd never been there in the first place.  Browsers already have “forward” and “back” navigational buttons; replicating their functions with more links on the page is redundant, and breaks navigation:  My “back” key should take me back through the pages I've browsed, in order, but when you put in unnecessary extra navigational links, you've added more pages in the forward direction.  Sure, put a link to the homepage, but it's a link going “to” the homepage, not “back to” it.

Search engine-unfriendly sites don't help anyone

Search engines index websites in a variety of ways:  The page titles, headings, the content on them, and what's written in links leading to pages.  They can't do their job properly, and you won't get many visitors, if you don't create a site in a way that makes for sensible indexing, or you deliberately exclude search engines from your website (my own web logs show that most people who find a page via a search engine only look at the page that they found, they don't check out other pages on the website).  Most of the things that we find on the web are through a search engine; and if you're not indexed, you won't get much traffic.

Sites that keep changing their content (relocating it, or rewriting what's on specific pages, etc.) make it impossible to find what you're after.  Sites that insist on making you drill down from their front page, instead of letting you go straight to a specific page, are a complete pain—we've got better things to do than try and find something buried within a website.  And sites with deliberately misleading information (just to get listed, somehow) are a waste of everybody's time—obnoxious sites like that deserve to get wiped out.

Cache-hostile sites are a thorough nuisance

Webserving has this brilliant feature whereby you can avoid downloading the same data several times over, and re-use what you've previously downloaded (the previous page, common graphics used on several pages, etc.), until some fool deliberately breaks that with anti-caching headers, meta statements, stupid expiry times, and misconfigured webservers.  Then, as you go back and forth through pages, you're forced to reload them; which is bad enough in itself, but these sorts of pages always seem to be served from the world's slowest webservers, and typically they're dynamically generated websites, with the bit that you want to read taking the longest to load, or refusing to show up until something that you don't care about loads (e.g. advertising).

Ironically, dynamically generated sites could probably benefit from caching the most—it'd significantly lighten their workload, as well as speeding up the results.  Instead of the server having to construct each page on the fly (for every single visitor to the page), it'd generate the page once, and the cached result would be served to a whole lot of visitors.  Of course, caching headers need to be set up appropriately (allowing caching, with expiry times suitable for the type of information being served—several days or hours, not just a few minutes).  But this (proper caching controls) should be the case anyway, regardless of what's being served (static or dynamically generated content).
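By way of illustration, a cache-friendly response for such a page might carry headers along these lines (the values shown are only examples; choose expiry times to suit your own content):

```http
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Cache-Control: public, max-age=86400
Last-Modified: Wed, 01 Nov 2006 10:00:00 GMT
ETag: "a1b2c3"
```

With headers like these, a shared cache may serve the same generated page for a day (86400 seconds), and a browser can revalidate with If-Modified-Since or If-None-Match instead of re-fetching the whole page.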

False webmasters foolishly believe that there's some advantage in preventing caching of their data.  While it may give a higher hit count in their statistics, it's still no real indicator of how many visitors they have (some caches will still work around such tomfoolery), and they'll increase the workload on their server (which might cost them more).  Not to mention making the browsing experience worse for many of their visitors.

As already mentioned, caching helps speed up browsing by avoiding fetching the same data over and over.  It also helps when a fast intermediary cache has already cached the contents of a web server: subsequent browsers can then use the site at the cache's speed.  A simple example is an office network with a slow connection to the internet, but running its own caching proxy.  The initial access to the internet is slow (the same as without a caching proxy), but once resources have been cached, everyone on that network can access the same information at the network's top speed.  That's very beneficial when a lot of workers are directed to have a look at the same thing (whether that be the latest amusing find on the web, or information pertinent to work): they can do it quickly, and minimise their web access costs.

Making it hard for people to contact you is unhelpful

Not providing any way for people to contact you, or making it hard to find the information, is not a good idea, and very stupid for commercial websites.  If you want customers, make it easy for people to get in touch with you, with whatever methods are available.  And actually answer any mail that you get.  Also, adequately describe your products and/or services on your website, so that people will know if they want to contact you.

Not including normal, and unobscured, e-mail links, makes it hard for people to e-mail you in a normal e-mail client (which generally is the easiest way to write messages).  It means that they can't compose a message to you while they're off-line; they can't keep a record of what they sent to you; and they can't write down an e-mail address (to use later on), while looking at your website on someone else's computer.
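A normal, unobscured e-mail link takes one line of HTML; the address below is, of course, a placeholder to replace with your own:

```html
<!-- A plain, unobscured e-mail link; replace the placeholder address. -->
<a href="mailto:webmaster@example.com">Write to the webmaster</a>
```

A link like this works with whatever mail client the visitor normally uses, lets them compose off-line, and leaves them with their own record of what they sent.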

Obscuring addresses is a really bad way of trying to minimise spam.  Not only doesn't it do much good (once your address is harvested, it doesn't need to be harvested again), it makes it hard for people to contact you.  If you must do it, then do it the least inconvenient way for people (no JavaScript, no messy images, and with the minimum of mental gymnastics and typing required to work it out).  The proper solution is to do good anti-spam filtering at your end.

Not providing a contact “form” means that only people with an e-mail account can contact you.  Likewise if you don't allow them to ask that you contact them back using some non-internet method (e.g. over the phone, or real mail).  Many people use other people's computers (private or public ones), where the only thing that they can use is the web browser; and they may not have an e-mail address, or one that they can access frequently.

When using a form, do it properly.  Do not use the “mailto:” protocol in the action URI; it's invalid, and it won't work for many people.  It'll only work in broken browsers (e.g. Internet Explorer), and only if they have a mail account configured (many don't, and many people use someone else's computer, where they can't configure such things).  Make the form so it isn't dependent on a large screen size or on scripting, and use a normal submit button.
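A bare-bones version of such a form might look like the following; the “/cgi-bin/contact” action is a hypothetical server-side script (which would send the message on from the server), not a real path on any particular server:

```html
<!-- A minimal contact form.  The action points at a (hypothetical)
     server-side script, never at a mailto: URI. -->
<form action="/cgi-bin/contact" method="post">
  <p><label>Your name: <input type="text" name="name"></label></p>
  <p><label>How to reply (e-mail, phone, or postal address):
    <input type="text" name="reply-to"></label></p>
  <p><label>Your message:<br>
    <textarea name="message" rows="10" cols="40"></textarea></label></p>
  <p><input type="submit" value="Send message"></p>
</form>
```

Note that it uses an ordinary submit button, needs no scripting, and lets the visitor ask for a reply by a non-internet method, addressing the problems described above.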

Overly complex forms are seriously annoying (e.g. ones that demand certain information that you don't have, don't need to give as part of your query, or don't want to divulge).  Likewise, with ones that give you a tiny little window to try and type a message into.

Provide both normal e-mail addresses and easily usable forms.  That lets people contact you using the most convenient method, for them.

e.g. A simple write to us page, which also has links to other methods of contact.

Providing a normal postal address is a wise idea, too; particularly if you're a supplier of some service.  It helps to let people know whereabouts in the world you are (e.g. there's not much point looking through the catalogue of some foreign company who don't trade overseas, but don't make it clear where they're based), and allows people to write to you even if they don't have their own computer.  Likewise, listing fax and phone numbers is also a good idea.  Many people want to find out about products in their own time, but still want to speak to a real person, or send something in writing.

Make it easy for people to get in touch, and respond to enquiries, or lose business.

Reasons why to author websites properly:

It “future proofs” your website

Using “valid” and “appropriate” HTML (as per the freely, and publicly, published specifications), means that your pages are written “correctly,” and that they should work “correctly” in all browsers (the ones that exist now, and any others that will be created; so long as the browsers are built properly).

Get it right, in the first place, and you won't have to keep on “fixing” it, to work in newer, and different, browsers.

It's more compatible

Understand (and work with) those concepts, and your pages will work properly on different browsers.  Work against it, and they'll fail; on many browsers.  If you find that you're having to force things for certain browsers, or that you're even trying to “force” something, in the first place; then you're using HTML against its spirit, and your efforts are bound to be limited in their effectiveness, or even cause more problems.

By way of another example of HTML's “built-in” compatibility and convenience; HTML is also designed to present its contents to more than just visual media (e.g. aural browsers), as it already stands, without “requiring” a second version of the contents (until an author stupidly “relies” on non-textual content, a specific layout, or complicates the contents, etc.).

It's simpler

A few authoritative resources:




ECMA Script (JavaScript):



Some other resources:

(Useful information, but, perhaps, not what you'd call “authoritative”.)

Other browsers:

About this page:

This page was originally written using the W3C's Amaya browser/editor, has been checked for HTML 4.01 validity, and passed without any errors being detected.  It should work, okay, on most web browsers; and is best viewed at whatever resolution suits you.  Can you claim the same thing about your pages?

I (the author) originally published the page towards the end of 2002, at <http://homepages.ihug.com.au/~night-owl/morons.html>, re-published it at <http://members.optusnet.com.au/~night.owl/morons.html> when I changed ISPs (where, over two years, it attracted a surprising 33,459 visitors, approximately), then republished it on my “evpc.biz” website (in October, 2004), once I'd registered my own domain name.  That was going to be its permanent home, unless I managed to think of a domain name I preferred to evpc.biz (which I now have).  Since then (November, 2006), I've registered a new domain name (cameratim.com), and the page has moved over to it.

Linking to here

Anybody linking to this page should link to the new location, using the exact address that I provide.  Do not change the sub-domain, nor add .html to the end of the address.  It's not meant to be there, no matter what you think, even if it appears to currently work.  Omitting it allows automatic HTTP content-negotiation to work.  Today it might be HTML, tomorrow it might be provided using another form of content, without needing any change in address.  Including it means that the address will only work while there's a .html file on the server.
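On an Apache server, for instance, that sort of content negotiation can be enabled with MultiViews (an illustrative fragment; other webservers have their own equivalents):

```apache
# With MultiViews on, a request for /personal/soapbox/morons-in-webspace
# is answered by morons-in-webspace.html today, and could be answered by
# some other representation later, without the address ever changing.
Options +MultiViews
```

This is why the extensionless address is the one to link to: the server picks the best available representation, and the address outlives any particular file format.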

This page's address is:  http://www.cameratim.com/personal/soapbox/morons-in-webspace

Some changes have been made to this page, to fit in with the new policy I have of constructing addresses, including anchors within pages:  All my addresses will now be written in all-lower-case, the use of anything that requires typing with the shift key will be avoided, and hyphens are used where there would normally be spaces between words.  Any links to this page using the old address, particularly ones referring to anchors on the page, will need to be updated to suit the new policy.  Currently, anchors within this page have both the new hyphenated anchors, and the old underscore-separated ones.  Don't use the old ones to refer to sections within this page, they'll get removed, eventually.

If you're reading it from someone else's re-publication, you may wish to check this location (as written just above) for updated versions, and you may inform anybody hosting copies of this page that it now has a permanent home (here).

You're free to copy this page (in whole or part), put it on websites, send it to clueless website authors, or do anything else useful that you can think of doing with it.  It's a single stand-alone file, and doesn't “need” the associated CSS files; though you're free to copy them, too.  Nor does it “need” the other pages that this page links to; and copying the linked “write to the author” (me) page wouldn't do you much good, anyway.  I recommend that you read the page source, and modify any links to things which aren't suitable (such as the links to my homepage and my “write to the author” page; and links to my stylesheets, if you don't copy them, as well).  There are comments in the HTML, near those things, to give you some help.  You do not need to seek my permission, nor have to attribute the page to me, though you cannot claim it as being yours (because it simply is not).  If you feel the need to attribute it, then I suggest that you simply mention that the page was copied from this site, at whatever date that you did so.

Linking to some resources on this website is not advised; as their URI may not be permanent (that's one reason why it's good to ask before linking, you find out these things, and can arrange to avoid problems).  But this page, or one very like it, will remain on my website, accessible, providing that people don't refer to it using an incorrect address.

Homepage, personal section, soapbox sub-section.