ec2-3-236-142-143.compute-1.amazonaws.com | ToothyWiki | RecentChanges | Login | Webcomic


Vitenka really really dislikes JavaScript.  Enough to turn it off by default, and to prefer netscape 2 over most other browsers.
Some other people popped in with some reasons why JavaScript can be useful.  Some of them are reasonable.
Interestingly, the whole discussion stayed at the functionality level - the 'scripting versus strongly-typed language' argument didn't happen.  Which is good.
Consensus appears to be that as long as your script degrades properly (i.e. the page stays functional when it is turned off) then there are some sensible uses for it.
I still argue that DetectingBrowser is not one of them - but have moved that (as suggested) to a NewPage?.

I'll leave the rest of this discussion in.

EvilScript incarnate.

There is no legitimate reason for using javascript on your site.
That depends entirely on your goal when designing the site.  If your goal is simply to provide solid, textual, information in the quickest way possible you are correct.  Many, and perhaps the majority, have other goals in mind. - Kazuhiko
If you want to entertain, then there are better things than javascript.  If you want to do business, then you move everything server side or you get very very badly stung.  If your site is information, then text and graphics is sufficient.  Why do you need javascript?
Yes, business/information retrieval is server side and, as I said, for pure information text and graphics are sufficient.  A lot of sites exist for purposes other than for giving you information.  A large number are of the "I am wonderful" variety, be they for individuals or businesses and are trying to sell _something_.  For these purposes you use graphic tricks, advertising tricks and whatever can make your site look good/better than the others.  Banner ads are a pain but bearable, spawning windows are hideous but, other than that, anything that makes the user's time on a site a pleasant experience is a good thing.  I find rollovers actually quite useful (unless, of course, they are of the wait 10 seconds before the rollover image actually appears variety) as they highlight where you are, what you are doing and what options you have available.  Similarly, drop downs as a form of navigation can also help.  As with all things, JavaScript is extremely easy to abuse, but that doesn't make it necessarily evil - Kazuhiko
Ok, I'll take that final point.  But unless we have some kinda 'site certificated to use javascript in a non-annoying way' tag, it's baby and bathwater time.  The primary problem I have with 'enhanced' navigation (of which the worst offender is flash) is either slow loading time (can't go anywhere, images not loaded) or the damn thing doesn't degrade properly.  Having to open up the page in notepad because they forgot to put in an alternative to the dropdown is kinda irritating.  Oh, and I forgot to add that javascript adds state to your page, which makes bookmarks and saving far less useful.  (You CAN cock state up on server side, but it's harder)  And finally - trying to make yourself or your site 'look good' is a bad thing.  Have we not yet learnt the 'style over substance' lesson of the dot coms?

Which is a shame, because it could actually do fairly useful things (such as checking you've filled out a form sensibly) - but its use in rollovers and advert banners and spawning popup windows has rendered it unusable.
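A minimal sketch of the form-checking use mentioned above, assuming a hypothetical form with a field named cardnumber: the check runs client-side for convenience, the form still submits normally with JS off, and the server has to revalidate either way.

```javascript
// Hypothetical markup: <form onsubmit="return checkForm(this)"> with a
// field named "cardnumber".  If JS is off, onsubmit never runs and the
// form submits as usual - the server revalidates regardless.
function looksLikeCardNumber(value) {
  var digits = String(value).replace(/[ -]/g, '');  // strip spaces/dashes
  return /^[0-9]{13,19}$/.test(digits);             // digits only, plausible length
}

function checkForm(form) {
  if (!looksLikeCardNumber(form.cardnumber.value)) {
    alert('That does not look like a card number.');
    return false;  // block submission so the user can fix it
  }
  return true;     // submit; server-side checks still apply
}
```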

AlexChurchill jumps into the fray.  I certainly agree all JS requiring pages should degrade properly.  But I want to suggest some places where using JavaScript is neither evil nor a bad thing...
These were the options I said 'it has SOME valid uses for'...  --Vitenka
So it's OK to wait 30 seconds (typically longer than that, if the choice of smileys is worth it) for a window full of smileys to pop up, but not OK to wait 30 seconds (typically less than that, if the site designers are halfway competent) for input validation on your credit card details (which is going to need to be reimplemented serverside anyway, because if it's not someone will feed the server malicious data)? - MoonShadow (who's not necessarily against the principle, just wondering about the seeming inconsistency in the position described)
Well, ideally, the credit card details should be typed directly into a secure connection.  That they are present in the browser at all is a bad thing.  I'm on the client side, I consider smileys to be frivolous and credit cards to be important, so I want credit cards dealt with in as clear a way as possible.  I think I missed a reply when pasting my replies in here - though.  I originally had some cutting remarks about never wanting to visit a site with graphical smileys at all ;)  I guess the point is that I *have* to wait for the credit card to say 'ok - really done' but am also willing for it to make that process easier.  But I don't care much about smileys, so would rather it took ages to pop-up so that I can decide not to bother with them.  Having said that, if the credit card form is cruddily done and *requires* javascript, it won't get my number.  (Hmm... an odd point: javascript pops up a warning if a script tries to access a password field, since it could do bad things with that data - but has no way of knowing if a field is a credit card field)
AC: I stand by both of my earlier statements.  Usability is the key concept.  If having the 30-seconds-to-appear helper window lets you do what you want more easily in the 6 minutes you take to write your post, then it's probably saved you more than 30 seconds, and it's worth it anyway.  While I know people disagree on some precise details of what makes an interface usable, there are a lot of common points which almost everyone will agree makes the use of your GUI easier or more pleasant.  In some cases this will be when you save the user time (by warning them the line is too long, or things like, I don't know, syntax highlighting in the code they enter - JS can't do that yet, I don't think, but I'm sure it won't be long).  In other cases this will be optional things in which the user can invest a little time (letting the smiley window popup) in order to make the use of the form easier or more pleasant after that.  --AlexChurchill, suspecting a "discussion" page on GUIUsability may be in order
Might not be a bad idea.  I agree that it's a user choice thing - but javascript is far broader in scope, you'd need a very clever system for it to be able to GIVE you that user choice.  What would have been nice would have been if frames, iframes and popup windows had all been the same tag, and the browser got to choose which it liked.  As to the "won't use a site due to presentation" - this is true.  Presentation can add nothing to a site, but it can sure take a lot away.

Ignoring popup windows being evil in and of themselves
Why... because you know if you tried to defend that POV you'd be forced to back down and admit it's nothing more than a persona' preference??  :) --AC
PersonaPreference??  I like the turn of your typo. --Vitenka
TARGET= allows any normal hyperlink to target an existing browser window.  (Prior discussion snipped, but this little handy hint left in)
You're right, in fact, and that's something the Wizards site do.  I apologise - I had my facts wrong in my first half of the above bullet point.  --AC
That this is horribly incompatible with my having forty browser windows open is a bit of a problem, but easily solved if all sites used sensible naming conventions (sitename_windowname for example)  Quickly inserting things into edit fields is, I guess, a semi valid solution.  Although any site using graphical smileys is one I'd rather avoid...
Who was it who was arguing for content over presentation?  Now you're saying you'd rather avoid a site based on its presentation, irrespective of the content... and in a case where the presentation is doing nothing but providing extra options for the users??  -- a rather-too-provocative AC, please forgive
<Insert standard "are graphical smileys really content?" and "signal to noise ratio of 4k email containing text vs 4k email containing a single graphical smiley" rants here.> - MoonShadow, who is hypocritically responsible for putting a toothycat.net logo on every single wiki page.
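For reference, script gets the same named-window behaviour as TARGET=: window.open with a window name reuses an existing window of that name rather than spawning a fresh one.  The name below is a made-up instance of the sitename_windowname convention discussed above.

```javascript
// Reuse (or create) a single named window for this site, rather than
// spawning a new popup on every click.  'toothywiki_gallery' is a
// hypothetical example of the sitename_windowname naming convention.
function openInSiteWindow(url) {
  var win = window.open(url, 'toothywiki_gallery');
  if (win) { win.focus(); }   // bring the reused window to the front
  return win;
}
```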

// EvilScript
var os, i, interface, user;
os = window.application.runningUnder();   // (fictional API - that's the point)

for (i = 0; i < os.hardwareItems.length; i++) {
  if (os.hardwareItems[i].isHumanInterface) {
    interface = os.hardwareItems[i];
    break;
  }
}

if (interface != null) {
  user = interface.getUser();
}

alert('10 seconds and counting...');

Is it possible to query element dimensions calculated by the browser from Javascript, or at least query the pixel height of an em in a font or something? I'd like to size an element to fill the rest of its container, but can see no way of calculating how big the other elements in that container are unless I give them pixel sizes - in which case I'd have to force the fonts I use to particular pixel sizes, which is Evil.. - MoonShadow
Not sure I understand the question correctly, but if I do, would returning document.getElementById?('id').style.fontSize and/or document.getElementById?('id').offsetHeight work? --Rachael
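To sketch Rachael's offsetHeight suggestion (the element ids here are hypothetical): read the heights the browser actually computed, and give the filler element whatever is left over.  The arithmetic is split out into its own function so it can be checked independently of the DOM.

```javascript
// Pure arithmetic: how much of the container is left after the siblings.
function remainingHeight(containerHeight, siblingHeights) {
  var used = 0;
  for (var i = 0; i < siblingHeights.length; i++) used += siblingHeights[i];
  return Math.max(containerHeight - used, 0);  // never go negative
}

if (typeof document !== 'undefined') {   // only in a real browser
  // Hypothetical ids: a container holding a header and a filler element.
  var container = document.getElementById('container');
  var header    = document.getElementById('header');
  var filler    = document.getElementById('filler');
  filler.style.height =
    remainingHeight(container.offsetHeight, [header.offsetHeight]) + 'px';
}
```

Because offsetHeight reflects the browser's own layout, this avoids pinning fonts to pixel sizes.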

Suppose a page is in the middle of processing, say, an onClick function. Could the processing be interrupted by, say, an onMouseOver? function? If so, will processing of the onClick resume after the onMouseOver? has happened, or will it be abandoned? And does anyone have any tips for avoiding these kind of issues? Will I have to use an event queue like the ToothyGDL does? --AlexChurchill
Dunno what the spec says, but in reality... Undefined.  I have seen implementations which only process new javascript once the previous function has finished - serialising it and preventing interrupts.  I have also seen implementations that do the 'interrupt and forget about the previous' ComeFrom? thing.  --Vitenka
(MoonShadow) Definitely in MSIE; I have been burnt by it many times. Wouldn't surprise me if it also did it in Mozilla, though I've not tried to reproduce it. In MSIE, processing will return to where it was interrupted, but of course if the interrupted code was working on state that the interrupting event updated, things blow up - it's a breeding ground for classic race conditions. I discovered with early implementations of the RSS ticker that cases of setTimeout interrupting onClick / onMouseDown? were very easily reproducible, so hanging lumps of code from both that did processing on the same state was ill-advised. Moreover, JS has no explicit support for serialization. This kind of thing is precisely the reason why ToothyGDL arranges for all its nonatomic processing to only ever be triggerable by one class of event (namely, setTimeout; all other events add things to a worklist in an atomic manner that setTimeout empties in an atomic manner). Pre/post-inc/decrement as well as array push, pop, shift and unshift all appear to be atomic both in MSIE and Mozilla; this should be sufficient for whatever you need to do. If you don't want to implement an event queue, an alternative way would be to implement critical sections like this:

var semaphore = 0;

// in each event handler (the atomic post-increment does the test-and-set):
if (semaphore++ == 0) {
  // critical section goes here
  semaphore--;
} else {
  semaphore--;
  // a critical section is already running. 
  // write whatever code you need here that will, 
  // atomically and without interfering with critical state, 
  // cause us to be called again later;
  // or nothing at all if it's OK to just drop this call
}
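A minimal sketch of the worklist arrangement described above (the names are illustrative, not ToothyGDL's actual ones): event handlers only ever push onto an array, which is atomic per the discussion, and a single setTimeout-driven loop does all the nonatomic processing.

```javascript
var worklist = [];

// Event handlers never touch shared state directly; they just enqueue.
function postEvent(ev) {
  worklist.push(ev);            // push is atomic, so safe from any handler
}

// Only ever invoked from setTimeout, so nonatomic work is serialised here.
function drainWorklist(process) {
  var ev;
  while ((ev = worklist.shift()) !== undefined) {
    process(ev);                // all the real processing happens here
  }
}

// In a page you would keep this ticking, e.g.:
//   setTimeout(function tick() { drainWorklist(handle); setTimeout(tick, 50); }, 50);
```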

[Tricking users into uploading arbitrary files.]

FireFox seems to automatically append a "Browse" button to the second input, which is rather telling. --CH
You could probably relatively-position an element to overlay the Browse button (I found this linked from [an article on styling file upload controls]). I agree with the author that browsers should confirm files to be uploaded. --B


Last edited February 15, 2007 11:53 am (viewing revision 28, which is the newest) (diff)