Protecting Your Cookies: HttpOnly



Wouldn’t something like

XMLHttpRequest.prototype.__defineGetter__('getAllResponseHeaders', function(){ });

be a good workaround for Firefox’s issue?
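A gentler variant of the override above (a sketch, not a vetted defense) would filter the Set-Cookie lines out of the header dump instead of clobbering the method entirely. Note the page only wins this race if its own script runs before any injected script:

```javascript
// Filter Set-Cookie lines out of a raw XHR header dump.
function stripSetCookie(rawHeaders) {
  return rawHeaders
    .split(/\r?\n/)
    .filter(line => !/^set-cookie2?:/i.test(line))
    .join('\r\n');
}

// In a browser, wrap the native method so later scripts see only the
// filtered dump (illustrative; must run before any untrusted script):
if (typeof XMLHttpRequest !== 'undefined') {
  const nativeGetAll = XMLHttpRequest.prototype.getAllResponseHeaders;
  XMLHttpRequest.prototype.getAllResponseHeaders = function () {
    return stripSetCookie(nativeGetAll.call(this));
  };
}
```

Patching prototypes from the page is fragile at best; an attacker script that executes first can simply capture the original method.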


Now they’ll have to copy the HTML of the login screen with the expired-session message and have their JavaScript output that instead of stealing the cookies.


So, where’s the post on what exactly was wrong with the HTML sanitizer and how exactly you fixed it? Or is that too narrow a focus for the blog? :)


It’s a difficult scenario, because you can’t just clobber every questionable thing that comes over the wire from the user.

You do.


@Jonah hits it on the head. There’s always a way to hijack a session, even if it’s walking up to an unattended computer. Any critical / costly action should require the user to retype their password (or some secondary authentication method).

At least HttpOnly can try to limit what browser scripts can do, and it’s a step in the right direction, but as others point out, it’s not yet a total fix.

Another fix I can think of: if running script couldn’t load other scripts on the fly (i.e. via eval), and your web framework inspected every page’s output, removed any script references, and perhaps even disallowed inline scripts, you could be a lot safer.

Unfortunately there are a lot of vectors of attack, and it’s currently very easy for a developer to screw up.
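For what it’s worth, the framework-level lockdown described above (no eval, no inline scripts, only explicitly allowed script sources) is essentially what browsers later standardized as the Content-Security-Policy response header. A policy like the following refuses inline scripts, eval, and any script not served from the page’s own origin:

```http
Content-Security-Policy: script-src 'self'
```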


@Jonah, I think you’re right: when making profile changes etc., a password should always be required. Good point indeed.

I’m looking at implementing commenting etc. on my site (it’s a blog, but I don’t care too much about the parsing of the blog entries, since I’m the only one making them). I noticed the author stated that you can’t clobber every questionable thing that comes through. Why not? I was thinking of converting all angle brackets to their entity codes and then running through the result to look for acceptable tags, but instead of looking for tag letters between angle brackets, I’d look for letters between entity codes.

Is there anything glaringly wrong with this approach? Script stuff wouldn’t get through, because I would only allow specific tags such as strong, em, u, strike, etc.
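The encode-then-selectively-restore approach described above could be sketched like this (the tag whitelist is illustrative; note it deliberately allows no attributes):

```javascript
// Tags we re-enable after neutralizing everything (illustrative list).
const ALLOWED = ['strong', 'em', 'u', 'strike'];

function sanitizeComment(input) {
  // Step 1: neutralize all markup by entity-encoding &, <, and >.
  let out = input.replace(/&/g, '&amp;')
                 .replace(/</g, '&lt;')
                 .replace(/>/g, '&gt;');
  // Step 2: selectively restore whitelisted bare tags (no attributes).
  for (const tag of ALLOWED) {
    out = out.replace(new RegExp('&lt;(/?)' + tag + '&gt;', 'gi'),
                      '<$1' + tag + '>');
  }
  return out;
}
```

Because only exact, attribute-free open/close tags are restored, anything like a script tag or an onclick attribute stays escaped.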


If you expect your users to want BLOCKQUOTE, UL, and OL, you should probably be using a smarter text markup language (MediaWiki-esque) in the first place. ISTR Jeff was considering this route for stackoverflow a while back, but rejected it.

My list of HTML tags needed in this blog comment section: P, B, I, S (and possibly STRIKE), (might as well throw in U for completeness), TT (or CODE), PRE, A HREF (with obviously well-formed URLs only). A NAME is too abusable. SMALL might be nice, but it’s abusable. BIG is too abusable.


RE: “Another fix” and “remove any script references”: I neglected to say the web framework would remove any script references that you didn’t explicitly allow, which, outside of DNS poisoning, would pretty much nail the coffin shut on many XSS attacks.

Then again, it’s all a giant band-aid, as everything is sent as plain text. It takes only one compromised router…


It’s actually not that complicated. HTML-encode everything, so for example <b> becomes &lt;b&gt;, etc. Then selectively use text substitution to re-enable what you want, i.e. turn ‘&lt;b&gt;’ back into ‘<b>’.
Everything else remains escaped.



There seems to be a huge emphasis on cookie stealing, but don’t forget that XMLHttpRequest is extremely dangerous, since it can mimic any user action! What if an XSS script loads the user profile form, changes the email address, and then requests a new password be sent to it (via the common Forgot Password form)? The account is hijacked without even touching a cookie. Place additional security around these sensitive areas and do not rely solely on the HttpOnly directive.
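Gating sensitive operations on re-entry of the current password, as suggested above, might look like this minimal sketch. The in-memory user store and plain-text comparison are illustrative stand-ins, not production practice:

```javascript
// Illustrative in-memory user store (real code would use a database
// and store only password hashes).
const users = new Map([
  ['alice', { password: 'hunter2', email: 'a@example.com' }],
]);

function verifyPassword(username, supplied) {
  const user = users.get(username);
  return !!user && user.password === supplied; // real code: compare hashes
}

// Sensitive action: a stolen session alone is not enough, because the
// caller must also present the account's current password.
function changeEmail(session, suppliedPassword, newEmail) {
  if (!verifyPassword(session.username, suppliedPassword)) {
    return { ok: false, error: 'reauthentication required' };
  }
  users.get(session.username).email = newEmail;
  return { ok: true };
}
```

An XSS payload riding a victim’s session can still submit the form, but it cannot supply a password it doesn’t know.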


I think what you should implement is explained clearly in the following paper:


I didn’t have much of a clue about cookies. This article has opened up a lot of things for me. Thanks, buddy.


Why not keep a dictionary that maps the cookie credential to the IP used when the credential was granted, and make sure that the IP matches the dictionary entry on every page access?

I’m surprised this isn’t a standard practice… is there some gotcha to this I haven’t thought of? I’m not a web developer myself, so there could be a simple “yeah, but” to this solution.

The problem with this is that users with a fast-switching dynamic IP would be continuously prompted to log in again.
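The IP-pinning idea above could be sketched as follows. As noted in the thread, it breaks for users behind fast-rotating dynamic IPs or proxy pools, so it is defense in depth, not a complete fix:

```javascript
// Map each session ID to the client IP it was issued to (illustrative;
// a real app would store this alongside its server-side session data).
const sessionIps = new Map();

function issueSession(sessionId, clientIp) {
  sessionIps.set(sessionId, clientIp);
}

// On every request: the session is valid only from its original IP.
function validateSession(sessionId, clientIp) {
  return sessionIps.get(sessionId) === clientIp;
}
```

A stolen cookie replayed from the attacker’s machine would then fail validation, at the cost of forcing legitimate users to re-login whenever their IP changes.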


I wouldn’t worry about users with fast switching dynamic IPs. They have a bigger issue that’ll plague them until they find a real ISP.


I see a lot of comments wondering where Jeff has gotten to. Well, I think it’s a tad unfair and unrealistic to expect him to be able to post numerous times per week for the rest of eternity. I’ll happily keep checking back each day until a new post appears :)

I’m sure normal service will resume, he’s probably busy at work, or if not, having some much deserved time to himself.


Well it seems pretty likely that he’s trying to put off his next post until it can be the debut of Stack Overflow.

Which, you know, I’m actually in favor of. Looks like a useful site, from the screenshots I’ve seen.

But it’s been now well over a week since the last post. Obviously things are taking a little more time than he thought. Which again, I understand – how many times have we all been in that position where we’re JUST about ready to send something off and, oh, there’s a little bug here, and oh, just gotta remember to fix that up here, and oh, CRAP the whole thing is falling apart now, and oh, damn damn damn, and oh…

But come on, Jeff! We’re your lifeblood here. We are your base! Throw us a bone. Most of us aren’t in on the beta, so we’re just sitting here with a dead blog, and the coming soon page is pretty uninspiring. We’ve got nothing! If it isn’t a sure thing that you’ll have something spectacular soon (as in, by Monday), at least give us a little heads-up, something to gnaw on…

(Unless, of course, Jeff has been hit by a car or some other unforeseeable tragedy. In which case I eat my hat.)


Seems to me like the perfect time for a post about maintaining relationships with your clients. Stack Overflow may become a success or might fail, but you’ve become a name by providing regular posts on this blog. Now you are moving on to bigger and better things, you should not neglect the people who have afforded you the opportunity to make this your career choice.
Your posts of late have been infrequent and, in all honesty, not up to your usual standard. We, your readers, are still here. But we won’t be forever.


If you’re able to forget to HTML-encode some user input, you’re probably also just concatenating strings of text. If you build your pages using proper XML tools, there is no conceivable way that you can accidentally include unsanitized user input in the page.


Besides the fact (which you mention) that you can still perform other, non-cookie-related XSS attacks, there is another way to bypass HttpOnly protections, regardless of the browser: using XSS to mount a Cross-Site Tracing (XST) attack.
If the server supports the TRACE method, the malicious script can send a TRACE request and parse the response (which will contain the cookie).
Worse yet, even if the server does not support TRACE but one of the proxies along the way does (it can be a reverse proxy, or even the user’s organizational proxy), XST can still be accomplished by sending the TRACE request to the proxy…
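One server-side mitigation for the XST vector described above is simply to disable the TRACE method at the server (and at any proxies you control). In Apache httpd, for example, a single directive does it; other servers and proxies have their own equivalents:

```apache
# Refuse TRACE requests so an XST attack cannot echo HttpOnly
# cookies back to script through this server.
TraceEnable off
```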

BUT regardless of XST, I still highly recommend using HttpOnly. At least it will block non-XST attacks…


This blog post is wrong on one key issue: IE7 is still very vulnerable to the XMLHttpRequest exposure of HttpOnly cookies via response headers.

The fact is, the only browser that locks down this vector is IE8 beta, though Firefox 3.1 will surely lock it down too.

The latest version of IE7 as of this writing, 7.0.6001.18000, still exposes HttpOnly cookies via Set-Cookie headers in XMLHttpRequest.getAllResponseHeaders().

The latest version of IE8 beta 2 as of this writing, 8.0.6001.18241, also exposes HttpOnly cookies via Set-Cookie headers in XMLHttpRequest.getAllResponseHeaders(). Firefox 3.1 is on track to close this hole; see: