Bug 16968: [DOGFOOD] bugzilla doesn't go to the next bug automatically
Status: Closed, VERIFIED FIXED
Opened 25 years ago • Closed 25 years ago
Categories: Core :: Networking: Cookies, defect, P3
Target Milestone: M11
People: Reporter: dp, Assigned: jud
Whiteboard: [PDT+]
If I query for, say, my M11 buglist, and I modify one bug, usually it would go to
the next bug on my list. I don't see bugzilla doing that if I use seamonkey.
I think bugzilla is using cookies to do this list thing. My best guess is cookies
or some multipart-replace.
This is a blocker because I can't use mozilla to manage my buglist. Managing
bugs is about 80% of my use of the browser.
Updated•25 years ago
Status: NEW → ASSIGNED
Target Milestone: M11
Updated•25 years ago
Whiteboard: [PDT+]
Updated•25 years ago
Status: ASSIGNED → NEW
Comment 1•25 years ago
dp is absolutely correct that bugzilla is using cookies for this. Under 4.x
with the cookie warning message turned on, I see that a cookie is being set as
follows:
1- go to bugzilla.mozilla.org
2- click on query existing bug reports
3- cookie warning box appears: BUGLIST cookie is set to null
4- enter a request (e.g., put "morse" in the "assigned to" field)
5- press "submit query"
6- cookie warning box appears: BUGLIST cookie is set to a list of bugs.
Running the same test under 5.0, I observe steps 1 through 5, but no cookie
warning box appears in step 6. (Note: since the cookie warning box in 5.0
doesn't give the details of the cookie, I used the cookie viewer to verify that
the BUGLIST cookie was actually set to null in step 3.)
So I took a step back to see why the cookie wasn't being set. It's because the
response received back from the site does not include a set-cookie header at
step 6 (it does include it at step 3). I determined this by instrumenting the
SetHeader routines in nsHTTPRequest and nsHTTPResponse to print out headers.
Here is what I observed:
HEADERS AT STEP 3:
--REQUEST: <host> <bugzilla.mozilla.org>
--REQUEST: <accept> <*/*>
--REQUEST: <user-agent> <Mozilla/5.0 [en-US] (Windows_NT; I)>
--REQUEST: <referer> <http://bugzilla.mozilla.org/>
--RESPONSE: <date> <Fri, 22 Oct 1999 15:59:23 GMT>
--RESPONSE: <server> <Apache/1.3.4 (Unix)>
--RESPONSE: <set-cookie> <BUGLIST=>
--RESPONSE: <connection> <close>
--RESPONSE: <content-type> <text/html>
HEADERS AT STEP 6:
--REQUEST: <host> <bugzilla.mozilla.org>
--REQUEST: <accept> <*/*>
--REQUEST: <user-agent> <Mozilla/5.0 [en-US] (Windows_NT; I)>
--REQUEST: <referer> <http://bugzilla.mozilla.org/query.cgi>
--REQUEST: <cookie> <BUGLIST=>
--RESPONSE: <date> <Fri, 22 Oct 1999 15:59:46 GMT>
--RESPONSE: <server> <Apache/1.3.4 (Unix)>
--RESPONSE: <connection> <close>
--RESPONSE: <content-type> <multipart/x-mixed-replace;boundary=thisrandomstring>
-----
For the record, here is how I instrumented the SetHeader routine in
nsHTTPRequest.cpp (similar instrumentation was put into nsHTTPResponse.cpp).
NS_METHOD
nsHTTPRequest::SetHeader(nsIAtom* i_Header, const char* i_Value)
{
    /* display headers as they are being set */
    nsString atom;
    i_Header->ToString(atom);
    char* s = atom.ToNewCString();
    printf("--REQUEST: <%s> <%s>\n", s, i_Value);
    return mHeaders.SetHeader(i_Header, i_Value);
}
Reporter
Comment 2•25 years ago
So is the moral of the story that something is wrong with the JS that is setting
the cookie, as per your observation of:
--RESPONSE: <set-cookie> <BUGLIST=>
Should we start looking at the JS in the page to see why this isn't happening...
Updated•25 years ago
Status: NEW → ASSIGNED
Comment 3•25 years ago
The <set-cookie> <BUGLIST=> in step 3 was perfectly normal -- this is also what
was observed in the cookie-warning box of the 4.x browser. The problem is that
there was no set-cookie at all at step 6.
And it is not a javascript issue. The page has no javascript. The cookies that
are being set are all being done in the http headers. The question is whether the
site held back the set-cookie header for some reason, or whether netlib dropped it
somewhere before it reached the SetHeader call in nsHTTPResponse.cpp.
Assignee
Comment 4•25 years ago
My money is on the http server not sending it. My guess is we're malforming a
cookie value, or the server just doesn't like our request, and I'd bet on the
former (I thought I saw a cookie-value munging bug floating around).
Comment 5•25 years ago
I suspect that this might be the same as bug 16258. However I'm not prepared to
mark it as a duplicate yet. I'm pursuing both bugs right now.
Updated•25 years ago
Assignee: morse → valeski
Status: ASSIGNED → NEW
Comment 6•25 years ago
Ignore the above comment. This is not the same problem as bug 16258. From the
traffic I captured using TracePlus, I now know what is going on with this bug.
(Thanks, Judson, for telling me about and sending me the TracePlus tool for
capturing traffic -- I could never have zeroed in on this without it.)
The traffic received from the server when the submit button is pressed (step 5)
is as follows:
HTTP/1.1 200 OK
Date: Tue, 26 Oct 1999 20:47:27 GMT
Server: Apache/1.3.4 (Unix)
Connection: close
Content-Type: multipart/x-mixed-replace;boundary=thisrandomstring
--thisrandomstring
Content-type: text/html
<p>Please stand by ... <p>
--thisrandomstring
Content-type: text/html
Set-Cookie: BUGLIST=16968:15903:9419:9594:12609:12787:13022:14025:14889:14932:16258:16686:16873:17120:8530
<HTML><HEAD>
...
Everything is fine up to the "Content-Type: multipart" line. All headers
preceding that are indeed reflected inside the browser by calls to SetHeader
(see the instrumentation on SetHeader above). But the headers following it are
not. In particular, the Set-Cookie header is not, and that is why the cookie
with the entire bug list is not getting set.
Judson believes that the implementation of multipart is not complete. In
particular, we are currently not recognizing headers (set-cookie in particular)
within the multipart parts. So more implementation work has to be done in necko
in order for all this to work.
That's as far as I can take this. I am now assigning the bug to Judson at his
request.
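For illustration only, here is a minimal standalone sketch of the part-header
recognition that comment describes. This is not the necko code: the function
name is made up, the input is a plain std::string rather than necko's streams,
and a real implementation would also match header names case-insensitively.

// Sketch, not the actual fix: given one part of a multipart/x-mixed-replace
// body (everything after a "--thisrandomstring" boundary line), return the
// value of a Set-Cookie header found in that part's own header block, which
// ends at the first blank line.
#include <sstream>
#include <string>

static std::string ExtractPartSetCookie(const std::string& partBody)
{
    std::istringstream lines(partBody);
    std::string line;
    while (std::getline(lines, line)) {
        if (!line.empty() && line[line.size() - 1] == '\r')
            line.erase(line.size() - 1);      // tolerate CRLF line endings
        if (line.empty())
            break;                            // blank line ends the part headers
        const std::string kName = "Set-Cookie:";
        if (line.compare(0, kName.size(), kName) == 0) {
            std::string value = line.substr(kName.size());
            if (!value.empty() && value[0] == ' ')
                value.erase(0, 1);            // drop the space after the colon
            return value;                     // e.g. "BUGLIST=16968:15903:..."
        }
    }
    return std::string();                     // this part sets no cookie
}

The real fix has to do this inside the converter's data path and hand the value
back so the cookie module sees it, which is what the following comments work out.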
Assignee
Updated•25 years ago
Assignee: valeski → gagan
Assignee
Comment 7•25 years ago
This is nasty. The multipart mixed-replace stream converter needs a way to set
headers on an http request. Unfortunately, the converters come into play *after*
http has received the headers and notified observers accordingly.
Gagan,
If I invoke the http observers via the netmodulemanager inside the multimixed
converter, I need a channel (http channel) to pass headers onto the observers.
Can I create an http channel and set response headers on it without actually
connecting?
Another way of doing this would be to access the cookie service directly; hmmm.
Comment 8•25 years ago
I thought we agreed on letting someone set headers on a channel past the
OnHeadersAvailable mark, and that we would fire an additional OnHeadersAvailable
for someone setting it (that part may be missing right now). The stream
converters should look at the channel (in the stream obs/listeners) and use it
to set new headers back onto the channel, which will then be broadcast again,
and things will work fine from that point onwards.
Assignee
Comment 9•25 years ago
After whacking on HTTP a bit, I've made some changes to reflect the solution
described by gagan. However, they include the following new addition to
nsIHTTPChannel: SetResponseHeader(). This obviously exposes a piece of HTTP that
we were actively trying to keep closed to the outside world.
The stream converters have access only to an nsIChannel. I have the MIME Mixed
Replace converter QI the channel for the http channel, then call
SetResponseHeader() on it with the set-cookie header (it's easy to expand this
to handle any header). nsHTTPChannel::SetResponseHeader() now does all the
things nsHTTPResponseListener::FireOnHeadersAvail() used to do (and
FireOnHeadersAvail() just calls nsHTTPChannel's OnHeadersAvail()).
I haven't tested this yet (busted build elsewhere), but I'm concerned the extra
OnHeaders() notifications are going to break observers.
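As a simplified, self-contained model of that control flow only: the class and
function names below are stand-ins, not the real nsIHTTPChannel/nsHTTPChannel
interfaces, and only SetResponseHeader() plus the re-fired headers notification
correspond to what the comment describes.

// Simplified model with made-up names: the multipart converter pushes a
// part-level Set-Cookie back onto the HTTP channel, and the channel
// re-broadcasts its headers so observers (e.g. the cookie module) see it.
#include <functional>
#include <map>
#include <string>
#include <vector>

class HttpChannelModel {                      // stand-in for nsHTTPChannel
public:
    typedef std::map<std::string, std::string> HeaderMap;
    typedef std::function<void(const HeaderMap&)> HeadersObserver;

    void AddHeadersObserver(const HeadersObserver& obs) { mObservers.push_back(obs); }

    // Analogous in spirit to the new SetResponseHeader(): record the header,
    // then notify the headers observers a second time.
    void SetResponseHeader(const std::string& name, const std::string& value) {
        mResponseHeaders[name] = value;
        FireOnHeadersAvailable();
    }

    void FireOnHeadersAvailable() {
        for (size_t i = 0; i < mObservers.size(); ++i)
            mObservers[i](mResponseHeaders);
    }

private:
    HeaderMap mResponseHeaders;
    std::vector<HeadersObserver> mObservers;
};

// Stand-in for the MIME mixed-replace converter's part-header handling:
// when a Set-Cookie line shows up inside a body part, hand it to the channel.
void OnMultipartSetCookie(HttpChannelModel& channel, const std::string& cookie) {
    channel.SetResponseHeader("Set-Cookie", cookie);
}

In the real change the converter only holds an nsIChannel, so it first has to QI
to the http channel; and, as the comment worries, every observer then gets a
second headers notification, which is exactly the part that risks breaking them.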
Assignee
Updated•25 years ago
Assignee
Comment 10•25 years ago
I can't test my fix until I can hit submit.
Assignee
Comment 11•25 years ago
I got around the submit problem by entering the submit url directly (it's a GET
vs. a POST). My fix works. Gagan reviewed it. I'll check in on green.
Assignee
Updated•25 years ago
Status: ASSIGNED → RESOLVED
Closed: 25 years ago
Resolution: --- → FIXED
Assignee
Comment 12•25 years ago
Fix checked in 10/28/99, 1:15 Pacific time.
Assignee
Updated•25 years ago
Status: RESOLVED → REOPENED
Assignee
Comment 13•25 years ago
Crap. I just noticed a piece of this didn't make it in. It will go in shortly
with my next checkin.
Assignee
Updated•25 years ago
Resolution: FIXED → ---
Assignee
Comment 14•25 years ago
OK, now it's fixed for real :). Checked in 10/29/99, 1:55 pm Pacific time.
Assignee
Updated•25 years ago
Status: REOPENED → RESOLVED
Closed: 25 years ago → 25 years ago
Resolution: --- → FIXED
Updated•25 years ago
Status: RESOLVED → VERIFIED
Comment 15•25 years ago
Verified working - Linux 1999120808