bigboard r7337 - in trunk/bigboard/libgmail: . CVS ClientCookie ClientCookie/CVS demos demos/CVS



Author: marinaz
Date: Mon May 12 18:42:13 2008
New Revision: 7337
URL: http://svn.gnome.org/viewvc/bigboard?rev=7337&view=rev

Log:
Import of the libgmail library from
http://libgmail.cvs.sourceforge.net/libgmail/libgmail/
(revision 1.102 of libgmail.py, just after release 0.1.9).


Added:
   trunk/bigboard/libgmail/
   trunk/bigboard/libgmail/ANNOUNCE
   trunk/bigboard/libgmail/CHANGELOG
   trunk/bigboard/libgmail/COPYING
   trunk/bigboard/libgmail/CVS/
   trunk/bigboard/libgmail/CVS/Entries
   trunk/bigboard/libgmail/CVS/Entries.Log
   trunk/bigboard/libgmail/CVS/Repository
   trunk/bigboard/libgmail/CVS/Root
   trunk/bigboard/libgmail/ClientCookie/
   trunk/bigboard/libgmail/ClientCookie/CVS/
   trunk/bigboard/libgmail/ClientCookie/CVS/Entries
   trunk/bigboard/libgmail/ClientCookie/CVS/Repository
   trunk/bigboard/libgmail/ClientCookie/CVS/Root
   trunk/bigboard/libgmail/ClientCookie/_BSDDBCookieJar.py
   trunk/bigboard/libgmail/ClientCookie/_ClientCookie.py
   trunk/bigboard/libgmail/ClientCookie/_ConnCache.py
   trunk/bigboard/libgmail/ClientCookie/_Debug.py
   trunk/bigboard/libgmail/ClientCookie/_HeadersUtil.py
   trunk/bigboard/libgmail/ClientCookie/_LWPCookieJar.py
   trunk/bigboard/libgmail/ClientCookie/_MSIECookieJar.py
   trunk/bigboard/libgmail/ClientCookie/_MSIEDBCookieJar.py
   trunk/bigboard/libgmail/ClientCookie/_MozillaCookieJar.py
   trunk/bigboard/libgmail/ClientCookie/_Opener.py
   trunk/bigboard/libgmail/ClientCookie/_Request.py
   trunk/bigboard/libgmail/ClientCookie/_Util.py
   trunk/bigboard/libgmail/ClientCookie/__init__.py
   trunk/bigboard/libgmail/ClientCookie/_urllib2_support.py
   trunk/bigboard/libgmail/MANIFEST.in
   trunk/bigboard/libgmail/README
   trunk/bigboard/libgmail/demos/
   trunk/bigboard/libgmail/demos/COPYING
   trunk/bigboard/libgmail/demos/CVS/
   trunk/bigboard/libgmail/demos/CVS/Entries
   trunk/bigboard/libgmail/demos/CVS/Repository
   trunk/bigboard/libgmail/demos/CVS/Root
   trunk/bigboard/libgmail/demos/MakeTarBall.py   (contents, props changed)
   trunk/bigboard/libgmail/demos/README
   trunk/bigboard/libgmail/demos/archive.py   (contents, props changed)
   trunk/bigboard/libgmail/demos/filelist
   trunk/bigboard/libgmail/demos/folderlist
   trunk/bigboard/libgmail/demos/gcp.py   (contents, props changed)
   trunk/bigboard/libgmail/demos/gmailftpd.py   (contents, props changed)
   trunk/bigboard/libgmail/demos/gmailpopd.py   (contents, props changed)
   trunk/bigboard/libgmail/demos/gmailsmtp.py   (contents, props changed)
   trunk/bigboard/libgmail/demos/readmail.py
   trunk/bigboard/libgmail/demos/sendmsg.py   (contents, props changed)
   trunk/bigboard/libgmail/demos/test_fwd_attach.py   (contents, props changed)
   trunk/bigboard/libgmail/demos/test_notifier.py   (contents, props changed)
   trunk/bigboard/libgmail/demos/unreadmsgcount.py   (contents, props changed)
   trunk/bigboard/libgmail/gmail_transport.py
   trunk/bigboard/libgmail/lgconstants.py
   trunk/bigboard/libgmail/lgcontacts.py
   trunk/bigboard/libgmail/libgmail.py   (contents, props changed)
   trunk/bigboard/libgmail/mkconstants.py   (contents, props changed)
   trunk/bigboard/libgmail/setup.py
   trunk/bigboard/libgmail/test_contacts.py
   trunk/bigboard/libgmail/testlibgmail.py

Added: trunk/bigboard/libgmail/ANNOUNCE
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/ANNOUNCE	Mon May 12 18:42:13 2008
@@ -0,0 +1,47 @@
+ANN: libgmail 0.0.8 - Gmail access via Python - POP3 Proxy added!
+
+libgmail -- Python binding for Google's Gmail service
+
+<http://libgmail.sf.net/>
+
+The `libgmail` project is a pure Python binding to provide access to
+Google's Gmail web-mail service.
+
+The library currently ships with a demonstration utility to archive
+messages from a Gmail account into mbox files, suitable for importing
+into a local email client.
+
+Also includes a demonstration utility that acts as an SMTP proxy to
+allow mail to be sent from any standard mail client that uses SMTP
+(e.g. Mail.app, Mozilla, etc.). (Now handles attachments.)
+
+A new demonstration utility acts as a POP3 proxy to allow mail to be
+retrieved from any standard mail client that uses POP3 (e.g. Mail.app,
+Mozilla, etc.).
+
+Also features a demonstration utility to provide access to Gmail
+message attachments via a download-only FTP proxy--this allows
+retrieval of suitably marked attachments by a standard FTP client.
+Utilize more of your Gmail space!
+
+License: GPL 2.0 (gmailftpd.py/gmailpopd.py are dual licensed with PSF)
+
+Major changes since 0.0.7:
+
+ *  Fixed login to work again after it was broken by a Gmail change.
+
+ *  Added trash/delete message thread & trash/delete single message
+    functionality. (By request.)
+
+ *  POP3 proxy server demo. (By request.)
+
+ *  Added `GmailLoginFailure` exception to enable tidier handling of
+    login failures (which could be bad username/password or a Gmail
+    change).
+
+
+<p><a href="http://libgmail.sf.net/">libgmail 0.0.8</a> - The
+`libgmail` project is a pure Python binding to provide access to
+Google's Gmail web-mail service; includes SMTP, POP3 & FTP
+proxies. (23-Aug-04)</p>
+
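The `GmailLoginFailure` exception mentioned in the announcement enables the calling pattern sketched below. This is only an illustration: the `GmailAccount` and `GmailLoginFailure` names follow the 0.0.8-era API, but the class bodies here are minimal stand-ins so the example is self-contained, and the `open_account` helper is hypothetical.

```python
# Sketch of the tidier login-failure handling that GmailLoginFailure
# enables. The two classes below are stand-ins for the real libgmail
# objects; in real use you would `import libgmail` instead.

class GmailLoginFailure(Exception):
    """Raised on login failure: bad credentials or a Gmail-side change."""

class GmailAccount:
    def __init__(self, name, password):
        self.name, self._password = name, password

    def login(self):
        # The real implementation performs the HTTP login dance; here we
        # just simulate a rejected password for demonstration purposes.
        if self._password != "secret":
            raise GmailLoginFailure("bad username/password, or Gmail changed")

def open_account(name, password):
    """Return (account, None) on success, (None, error_message) on failure."""
    account = GmailAccount(name, password)
    try:
        account.login()
    except GmailLoginFailure as exc:
        return None, str(exc)
    return account, None
```

The point of the dedicated exception is that callers can distinguish a login failure (wrong credentials or a Gmail format change) from other errors without string-matching on a generic exception.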

Added: trunk/bigboard/libgmail/CHANGELOG
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/CHANGELOG	Mon May 12 18:42:13 2008
@@ -0,0 +1,311 @@
+== Version 0.1.9 ==
+libgmail.py
+    * Fixed login, which was broken for a number of new
+      Gmail accounts, thanks to a patch by rhauer
+
+NOTE: libgmail now depends on ClientCookie, which
+can be downloaded from: 
+  http://wwwsearch.sourceforge.net/ClientCookie/#download
+
+== Version 0.1.8 ==
+libgmail.py
+    * Added a 'search' method to contactLists that returns
+      an array of contacts that match a given search term.
+      (The contacts API is long overdue for a revamp,
+       but for now, hey, why not.)
+      This is a patch by Alex Chiang     --WD--
+    * libgmail now asks for the old Gmail interface,
+      so that it isn't broken by the new Gmail updates.
+      (Thanks to Aaron and Stu for work on this)
+      (Fixes SF bug #1822662)
+
+== Version 0.1.7 ==
+libgmail.py
+gmail_transport.py
+    * Applied patch that adds proxy support, both 
+      for passwordless and password-ful proxies 
+      (is that a word?), by Jose Rodriguez --WD+SZ--
+
+== Version 0.1.6.2 ==
+libgmail.py
+    * Bugfix for attachment problems --WD--
+      (SF Bug #1793026, Patch #1799605 by 'stephster')
+archive.py
+    * Protect messages with a "from" line in them --WD--
+      (SF Patch #1790809 by 'scop')
+
+== Version 0.1.6.1 == 
+libgmail.py
+    * Bugfix for login problems --WD--
+
+== Version 0.1.6 == 
+libgmail.py
+    * Added support for "Gmail Apps" aka "Gmail For Your Domain" --WD--
+
+== Version 0.1.5.1 ==
+libgmail.py
+    * Minor bugfix release -- logging in with the wrong
+      username and password caused a crash instead of
+      raising the appropriate exception --WD--
+
+== Version 0.1.5 ==
+libgmail.py
+    * Fixed an exception in the test code (SF bug #1486703) --SZ--
+    * Fixed broken login caused by slight format change
+      (SF Bug #1534275 - Thanks, anonymous tipster!) --WD--
+    * Added another attribute to the message class: to
+      (SF Bug #1528766) --WD--
+    * Fixed problems caused by repeated commas
+      (SF Bug #1512361) --SZ--
+
+== Version 0.1.4 ==
+libgmail.py
+    * Started new contacts code. --SZ--
+    * Bugfix involving 404 error raised when trying to send
+      an email (SF bug #1398323) --WD--
+    * Bugfix for broken len() iterator in GmailSearchResult
+      (SF bug #1365166) --WD--
+    * Bugfix for improper marking of messages as read
+      (SF bug #1365188) --WD--
+
+    NOTE: Expect an improved Contacts API in the next release.
+          We will strive for backwards-compatibility where
+          possible, but be prepared for possible changes.
+          Please feel free to contact us if you have
+          questions/comments/concerns about this.
+ 
+== Version 0.1.3.3 ==
+libgmail.py
+    * Fixed some bugs in the return values of the label methods. --SZ--
+
+== Version 0.1.3.2 ==
+libgmail.py
+ * Added some attributes to the message class: cc, bcc, sender. --SZ--
+ * Fixed the value returned by a __len__ call on the threads. --SZ--
+ * Fixed a bug in the sendmessage result --SZ--
+ * Added an exception handler to the getUnreadMsgCount method. --SZ--
+ * Added a method to retrieve only the unread messages from the inbox. --SZ--
+ 
+== Version 0.1.3.1 ==
+libgmail.py
+ * Fixed the problem that not all the messages from a thread were
+    returned. --SZ--
+ * Added an exception handler for a "500 server error" --SZ--
+
+== Version 0.1.3 ==
+ libgmail.py
+ * Fixed bugs that crashed libgmail when accessing an empty account --SZ--
+ * Fixed a bug where not all messages in large accounts were returned. --SZ--
+
+== Version 0.1.2 ==
+libgmail.py
+ * Added a \r to the line endings in the VCard export function. This is done
+   to comply with RFC 2425 section 5.8.1 --SZ--
+ * Fixed a security bug in the page parser. --SZ--
+
+ 
+== Version: 0.1.1 ==
+All
+ * Changed the shebang to use the 'env' program in all executables. --SZ--
+ * Fixed the redirect bug caused by the changed Gmail login pages. --WD--
+
+== Version: 0.1.0 ==
+libgmail.py
+ * Added contacts support. --WD--
+ * Added contacts test suite. --WD--
+ * Added finer-grained debugging control --WD--
+ * Applied a patch that handles the login redirect URL properly;
+   login now works. --WD--
+ * Removed the fork message. It was a leftover from the initial forking. --SZ--
+
+constants.py
+ * Renamed to lgconstants.py to avoid name conflicts --WD--
+
+== Version: 0.0.8 (23 August 2004) ==
+libgmail.py
+ *  Fixed login to work again after it was broken by a Gmail change.
+    Centralised cookie extraction. Added debug-level logging of cookie
+    extraction & storage.
+
+ *  Add trash/delete message thread functionality to account object.
+
+constants.py, libgmail.py, mkconstants.py
+ *  Add trash/delete single message functionality to account object.
+
+demos/gmailpopd.py
+ *  Initial rough POP3 proxy server demo. Works with Mail.app when I
+    tried it... :-) Sometimes causes items to be downloaded even when
+    they don't *really* need to be. Causes some items to be marked as
+    read even if the client doesn't actually request them.
+
+ *  Refactored message retrieval from account snapshot to allow
+    partial message retrieval (for TOP functionality).
+
+ *  Added POP3 TOP command functionality, which is required by Mozilla
+    as it (wrongly) doesn't work with the absolute minimum command set
+    specified by the RFC and requires TOP.
+
+ *  Fixed copy/paste error to change 'ftp_QUIT' to 'pop_QUIT'.
+
+ *  Moved byte-stuffing and message massaging into separate functions.
+
+libgmail.py, demos/archive.py, demos/gmailftpd.py, demos/gmailpopd.py, demos/gmailsmtp.py, demos/sendmsg.py
+ *  Added `GmailLoginFailure` exception to enable tidier handling of
+    login failures (which could be bad username/password or a Gmail
+    change).
+
+ *  Updated demos to catch `GmailLoginFailure` exception.
+
+ *  Removed the unsupported "LOGIN" authentication method in the SMTP
+    demo, which was erroneously included in the server capability response.
+
+ANNOUNCE
+ *  Minor typo fix.
+
+
+== Version: 0.0.7 (03 August 2004) ==
+
+constants.py, mkconstants.py
+ *  Added attachment related constants. 
+
+libgmail.py, demos/gmailsmtp.py
+ *  Allow file data to be specified directly (rather than via an on-
+    disk file) when specifying attachments (this allows using existing
+    Message instance payloads mostly directly). Modify SMTP Proxy demo
+    to handle sending attachments.
+
+demos/gmailftpd.py
+ *  Initial import of Gmail attachments FTP Proxy! 
+
+libgmail.py
+ *  Corrected version info for previous release. 
+
+ *  Added 'getMessagesByQuery' function. Added initial attachment
+    retrieval handling. Clean up handling of references to parent
+    objects & account objects. Version info update.
+
+ *  Handle sending attachments. Works, but implementation is extremely
+    *cough* sub-optimal...
+
+ *  Don't try to attach files if there are none. 
+
+
+== Version: 0.0.6 (15 July 2004) ==
+
+demos/gmailsmtp.py
+ *  That was too easy, there oughta be a law! Thanks to Python's
+    undocumented SMTP server module we can now send mail with a
+    standard mail client via (E)SMTP. Extended standard SMTP class to
+    handle ESMTP EHLO & AUTH PLAIN commands.
+
+libgmail.py
+ *  Added utility function '_retrieveJavascript' to 'GmailAccount' to
+    help developers who want to look at it. (In theory also so you can
+    regenerate 'constants.py' but the Javascript Gmail now uses isn't
+    actually useful for that anymore...) (Added by request.)
+
+
+== Version: 0.0.5 (11 July 2004) ==
+
+libgmail.py, demos/sendmsg.py
+ *  Added functionality to enable message sending. Modified automatic
+    cookie handling. Added command line example to send a message.
+    Enabled page requests to be either a URL or a Request instance.
+
+constants.py, mkconstants.py
+ *  Added more useful constants.
+
+
+== Version: 0.0.4 (11 July 2004) ==
+
+constants.py, mkconstants.py
+ *  Include standard folder/search name constants. 
+
+ *  Add more useful constants. 
+
+constants.py, libgmail.py, mkconstants.py
+ *  Added category name retrieval. 
+
+mkconstants.py
+ *  'mkconstants' isn't really useful anymore with the new JS version.
+
+libgmail.py
+ *  Add ability to get number of unread messages. 
+
+ *  Handle items that might be 'bunched' such as thread lists better. 
+
+ *  Only warn about mismatched Javascript versions once per module import.
+    (Note: This may mean the Javascript version may change more than
+    once in a session and the second change won't be warned, but that
+    shouldn't be much of an issue...)
+
+ *  Refactor URL construction. Refactor query/search operation in
+    preparation for adding searches.
+
+ *  More refactoring. Made thread search query more generic to allow
+    use by (to come) label searches etc. Threads now belong to
+    'GmailSearchResult' instances rather than folders. Threads now
+    retrieve their own messages rather than relying on their parent to
+    do so.
+
+ *  We now refer to categories as labels, as the UI does. Enable
+    retrieval by label.
+
+libgmail.py, demos/archive.py
+ *  Allow all pages of results to be returned for a 'getFolder'
+    request. (Not tested much.)
+
+ *  Provide easy access to standard folder names. Added length
+    property to folders. Examples now handle empty folders gracefully.
+
+ *  Now uses 'getMessagesByXXXXX' style method names for folders &
+    labels. Now refer to original message source as 'source' & not
+    'body'. Enable demos to search by folder name or label name.
+
+
+
+== Version: 0.0.3 (8 July 2004) ==
+
+libgmail.py
+ * Allow username to be specified on the command line instead of prompting.
+ * Rough special-case handling for when more than one set of thread information data is present on a page (seemed to occur when using 'all' search after a certain number of items). TODO: Make this fix work at the page-parsing level by splitting all tuples into individual items.
+ * Add cookie handling code to enable us to remove requirement for ClientCookie package. (Especially for Adrian... :-) )
+
+demos/archive.py
+ * *Extremely* rough mbox creation--turns out the mails retrieved had '\r' characters at the end of the headers. The mbox file appears to be successfully imported by OS X's Mail.app client.
+ * Allow username to be specified on the command line instead of prompting.
+
+
+== Version: 0.0.2a (~6 July 2004) ==
+
+* No code change, renamed to try to avoid SourceForge mirroring problems.
+
+
+== Version: 0.0.2 (5 July 2004) ==
+
+constants.py
+ * Useful constants from the Gmail Javascript code as Python module.
+ * Update to match current live Javascript.
+ * Fudge some enumerations that we need to start at 0.
+
+libgmail.py
+ * Refactor to make use of Folder/Thread/Message model. Standardised some naming. Make use of imported Gmail constants. Centralise page retrieval & parsing.
+ * Calculate number of messages in thread.
+ * Refactor & reorganise code. Minor style edits. Refine design of folder, thread & message classes. Modify folders, threads & messages to be as lazy as possible when it comes to retrieving data from the net. Enable message instances to retrieve their original mail text. Add Gmail implementation notes. Hide password entry. Demo now displays threads & messages.
+ * Version date change.
+
+mkconstants.py
+ * Tool to make useful constants from the Gmail Javascript code available via a Python module.
+ * Fudge some enumerations that we need to start at 0.
+
+demos/archive.py
+ * Initial rough demo to archive all messages into text files.
+
+CHANGELOG
+ * Added.
+
+
+== Version: 0.0.1 (2 July 2004) ==
+
+libgmail.py
+ * Initial import of version 0.0.1 (as posted in comp.lang.python).
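The 0.1.2 entry above adds a `\r` to line endings in the VCard export function to comply with RFC 2425 section 5.8.1, which requires CRLF delimiters. The sketch below illustrates that fix; `export_vcard` is a hypothetical helper for demonstration, not the library's actual function name.

```python
def export_vcard(fields):
    """Render a minimal vCard from a list of (name, value) pairs,
    e.g. [("FN", "Ada Lovelace")]."""
    lines = ["BEGIN:VCARD", "VERSION:2.1"]
    lines += ["%s:%s" % (name, value) for name, value in fields]
    lines.append("END:VCARD")
    # RFC 2425 section 5.8.1: lines are delimited by CRLF, so join
    # with "\r\n" rather than a bare "\n".
    return "\r\n".join(lines) + "\r\n"
```

Joining with bare `"\n"` produces output many vCard consumers reject, which is exactly the bug the 0.1.2 release fixed.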

Added: trunk/bigboard/libgmail/COPYING
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/COPYING	Mon May 12 18:42:13 2008
@@ -0,0 +1,340 @@
+		    GNU GENERAL PUBLIC LICENSE
+		       Version 2, June 1991
+
+ Copyright (C) 1989, 1991 Free Software Foundation, Inc.
+     59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+			    Preamble
+
+  The licenses for most software are designed to take away your
+freedom to share and change it.  By contrast, the GNU General Public
+License is intended to guarantee your freedom to share and change free
+software--to make sure the software is free for all its users.  This
+General Public License applies to most of the Free Software
+Foundation's software and to any other program whose authors commit to
+using it.  (Some other Free Software Foundation software is covered by
+the GNU Library General Public License instead.)  You can apply it to
+your programs, too.
+
+  When we speak of free software, we are referring to freedom, not
+price.  Our General Public Licenses are designed to make sure that you
+have the freedom to distribute copies of free software (and charge for
+this service if you wish), that you receive source code or can get it
+if you want it, that you can change the software or use pieces of it
+in new free programs; and that you know you can do these things.
+
+  To protect your rights, we need to make restrictions that forbid
+anyone to deny you these rights or to ask you to surrender the rights.
+These restrictions translate to certain responsibilities for you if you
+distribute copies of the software, or if you modify it.
+
+  For example, if you distribute copies of such a program, whether
+gratis or for a fee, you must give the recipients all the rights that
+you have.  You must make sure that they, too, receive or can get the
+source code.  And you must show them these terms so they know their
+rights.
+
+  We protect your rights with two steps: (1) copyright the software, and
+(2) offer you this license which gives you legal permission to copy,
+distribute and/or modify the software.
+
+  Also, for each author's protection and ours, we want to make certain
+that everyone understands that there is no warranty for this free
+software.  If the software is modified by someone else and passed on, we
+want its recipients to know that what they have is not the original, so
+that any problems introduced by others will not reflect on the original
+authors' reputations.
+
+  Finally, any free program is threatened constantly by software
+patents.  We wish to avoid the danger that redistributors of a free
+program will individually obtain patent licenses, in effect making the
+program proprietary.  To prevent this, we have made it clear that any
+patent must be licensed for everyone's free use or not licensed at all.
+
+  The precise terms and conditions for copying, distribution and
+modification follow.
+
+		    GNU GENERAL PUBLIC LICENSE
+   TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
+
+  0. This License applies to any program or other work which contains
+a notice placed by the copyright holder saying it may be distributed
+under the terms of this General Public License.  The "Program", below,
+refers to any such program or work, and a "work based on the Program"
+means either the Program or any derivative work under copyright law:
+that is to say, a work containing the Program or a portion of it,
+either verbatim or with modifications and/or translated into another
+language.  (Hereinafter, translation is included without limitation in
+the term "modification".)  Each licensee is addressed as "you".
+
+Activities other than copying, distribution and modification are not
+covered by this License; they are outside its scope.  The act of
+running the Program is not restricted, and the output from the Program
+is covered only if its contents constitute a work based on the
+Program (independent of having been made by running the Program).
+Whether that is true depends on what the Program does.
+
+  1. You may copy and distribute verbatim copies of the Program's
+source code as you receive it, in any medium, provided that you
+conspicuously and appropriately publish on each copy an appropriate
+copyright notice and disclaimer of warranty; keep intact all the
+notices that refer to this License and to the absence of any warranty;
+and give any other recipients of the Program a copy of this License
+along with the Program.
+
+You may charge a fee for the physical act of transferring a copy, and
+you may at your option offer warranty protection in exchange for a fee.
+
+  2. You may modify your copy or copies of the Program or any portion
+of it, thus forming a work based on the Program, and copy and
+distribute such modifications or work under the terms of Section 1
+above, provided that you also meet all of these conditions:
+
+    a) You must cause the modified files to carry prominent notices
+    stating that you changed the files and the date of any change.
+
+    b) You must cause any work that you distribute or publish, that in
+    whole or in part contains or is derived from the Program or any
+    part thereof, to be licensed as a whole at no charge to all third
+    parties under the terms of this License.
+
+    c) If the modified program normally reads commands interactively
+    when run, you must cause it, when started running for such
+    interactive use in the most ordinary way, to print or display an
+    announcement including an appropriate copyright notice and a
+    notice that there is no warranty (or else, saying that you provide
+    a warranty) and that users may redistribute the program under
+    these conditions, and telling the user how to view a copy of this
+    License.  (Exception: if the Program itself is interactive but
+    does not normally print such an announcement, your work based on
+    the Program is not required to print an announcement.)
+
+These requirements apply to the modified work as a whole.  If
+identifiable sections of that work are not derived from the Program,
+and can be reasonably considered independent and separate works in
+themselves, then this License, and its terms, do not apply to those
+sections when you distribute them as separate works.  But when you
+distribute the same sections as part of a whole which is a work based
+on the Program, the distribution of the whole must be on the terms of
+this License, whose permissions for other licensees extend to the
+entire whole, and thus to each and every part regardless of who wrote it.
+
+Thus, it is not the intent of this section to claim rights or contest
+your rights to work written entirely by you; rather, the intent is to
+exercise the right to control the distribution of derivative or
+collective works based on the Program.
+
+In addition, mere aggregation of another work not based on the Program
+with the Program (or with a work based on the Program) on a volume of
+a storage or distribution medium does not bring the other work under
+the scope of this License.
+
+  3. You may copy and distribute the Program (or a work based on it,
+under Section 2) in object code or executable form under the terms of
+Sections 1 and 2 above provided that you also do one of the following:
+
+    a) Accompany it with the complete corresponding machine-readable
+    source code, which must be distributed under the terms of Sections
+    1 and 2 above on a medium customarily used for software interchange; or,
+
+    b) Accompany it with a written offer, valid for at least three
+    years, to give any third party, for a charge no more than your
+    cost of physically performing source distribution, a complete
+    machine-readable copy of the corresponding source code, to be
+    distributed under the terms of Sections 1 and 2 above on a medium
+    customarily used for software interchange; or,
+
+    c) Accompany it with the information you received as to the offer
+    to distribute corresponding source code.  (This alternative is
+    allowed only for noncommercial distribution and only if you
+    received the program in object code or executable form with such
+    an offer, in accord with Subsection b above.)
+
+The source code for a work means the preferred form of the work for
+making modifications to it.  For an executable work, complete source
+code means all the source code for all modules it contains, plus any
+associated interface definition files, plus the scripts used to
+control compilation and installation of the executable.  However, as a
+special exception, the source code distributed need not include
+anything that is normally distributed (in either source or binary
+form) with the major components (compiler, kernel, and so on) of the
+operating system on which the executable runs, unless that component
+itself accompanies the executable.
+
+If distribution of executable or object code is made by offering
+access to copy from a designated place, then offering equivalent
+access to copy the source code from the same place counts as
+distribution of the source code, even though third parties are not
+compelled to copy the source along with the object code.
+
+  4. You may not copy, modify, sublicense, or distribute the Program
+except as expressly provided under this License.  Any attempt
+otherwise to copy, modify, sublicense or distribute the Program is
+void, and will automatically terminate your rights under this License.
+However, parties who have received copies, or rights, from you under
+this License will not have their licenses terminated so long as such
+parties remain in full compliance.
+
+  5. You are not required to accept this License, since you have not
+signed it.  However, nothing else grants you permission to modify or
+distribute the Program or its derivative works.  These actions are
+prohibited by law if you do not accept this License.  Therefore, by
+modifying or distributing the Program (or any work based on the
+Program), you indicate your acceptance of this License to do so, and
+all its terms and conditions for copying, distributing or modifying
+the Program or works based on it.
+
+  6. Each time you redistribute the Program (or any work based on the
+Program), the recipient automatically receives a license from the
+original licensor to copy, distribute or modify the Program subject to
+these terms and conditions.  You may not impose any further
+restrictions on the recipients' exercise of the rights granted herein.
+You are not responsible for enforcing compliance by third parties to
+this License.
+
+  7. If, as a consequence of a court judgment or allegation of patent
+infringement or for any other reason (not limited to patent issues),
+conditions are imposed on you (whether by court order, agreement or
+otherwise) that contradict the conditions of this License, they do not
+excuse you from the conditions of this License.  If you cannot
+distribute so as to satisfy simultaneously your obligations under this
+License and any other pertinent obligations, then as a consequence you
+may not distribute the Program at all.  For example, if a patent
+license would not permit royalty-free redistribution of the Program by
+all those who receive copies directly or indirectly through you, then
+the only way you could satisfy both it and this License would be to
+refrain entirely from distribution of the Program.
+
+If any portion of this section is held invalid or unenforceable under
+any particular circumstance, the balance of the section is intended to
+apply and the section as a whole is intended to apply in other
+circumstances.
+
+It is not the purpose of this section to induce you to infringe any
+patents or other property right claims or to contest validity of any
+such claims; this section has the sole purpose of protecting the
+integrity of the free software distribution system, which is
+implemented by public license practices.  Many people have made
+generous contributions to the wide range of software distributed
+through that system in reliance on consistent application of that
+system; it is up to the author/donor to decide if he or she is willing
+to distribute software through any other system and a licensee cannot
+impose that choice.
+
+This section is intended to make thoroughly clear what is believed to
+be a consequence of the rest of this License.
+
+  8. If the distribution and/or use of the Program is restricted in
+certain countries either by patents or by copyrighted interfaces, the
+original copyright holder who places the Program under this License
+may add an explicit geographical distribution limitation excluding
+those countries, so that distribution is permitted only in or among
+countries not thus excluded.  In such case, this License incorporates
+the limitation as if written in the body of this License.
+
+  9. The Free Software Foundation may publish revised and/or new versions
+of the General Public License from time to time.  Such new versions will
+be similar in spirit to the present version, but may differ in detail to
+address new problems or concerns.
+
+Each version is given a distinguishing version number.  If the Program
+specifies a version number of this License which applies to it and "any
+later version", you have the option of following the terms and conditions
+either of that version or of any later version published by the Free
+Software Foundation.  If the Program does not specify a version number of
+this License, you may choose any version ever published by the Free Software
+Foundation.
+
+  10. If you wish to incorporate parts of the Program into other free
+programs whose distribution conditions are different, write to the author
+to ask for permission.  For software which is copyrighted by the Free
+Software Foundation, write to the Free Software Foundation; we sometimes
+make exceptions for this.  Our decision will be guided by the two goals
+of preserving the free status of all derivatives of our free software and
+of promoting the sharing and reuse of software generally.
+
+			    NO WARRANTY
+
+  11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
+FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW.  EXCEPT WHEN
+OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
+PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
+OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.  THE ENTIRE RISK AS
+TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU.  SHOULD THE
+PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
+REPAIR OR CORRECTION.
+
+  12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
+WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
+REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
+INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
+OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
+TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
+YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
+PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGES.
+
+		     END OF TERMS AND CONDITIONS
+
+	    How to Apply These Terms to Your New Programs
+
+  If you develop a new program, and you want it to be of the greatest
+possible use to the public, the best way to achieve this is to make it
+free software which everyone can redistribute and change under these terms.
+
+  To do so, attach the following notices to the program.  It is safest
+to attach them to the start of each source file to most effectively
+convey the exclusion of warranty; and each file should have at least
+the "copyright" line and a pointer to where the full notice is found.
+
+    <one line to give the program's name and a brief idea of what it does.>
+    Copyright (C) <year>  <name of author>
+
+    This program is free software; you can redistribute it and/or modify
+    it under the terms of the GNU General Public License as published by
+    the Free Software Foundation; either version 2 of the License, or
+    (at your option) any later version.
+
+    This program is distributed in the hope that it will be useful,
+    but WITHOUT ANY WARRANTY; without even the implied warranty of
+    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+    GNU General Public License for more details.
+
+    You should have received a copy of the GNU General Public License
+    along with this program; if not, write to the Free Software
+    Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+
+
+Also add information on how to contact you by electronic and paper mail.
+
+If the program is interactive, make it output a short notice like this
+when it starts in an interactive mode:
+
+    Gnomovision version 69, Copyright (C) year  name of author
+    Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
+    This is free software, and you are welcome to redistribute it
+    under certain conditions; type `show c' for details.
+
+The hypothetical commands `show w' and `show c' should show the appropriate
+parts of the General Public License.  Of course, the commands you use may
+be called something other than `show w' and `show c'; they could even be
+mouse-clicks or menu items--whatever suits your program.
+
+You should also get your employer (if you work as a programmer) or your
+school, if any, to sign a "copyright disclaimer" for the program, if
+necessary.  Here is a sample; alter the names:
+
+  Yoyodyne, Inc., hereby disclaims all copyright interest in the program
+  `Gnomovision' (which makes passes at compilers) written by James Hacker.
+
+  <signature of Ty Coon>, 1 April 1989
+  Ty Coon, President of Vice
+
+This General Public License does not permit incorporating your program into
+proprietary programs.  If your program is a subroutine library, you may
+consider it more useful to permit linking proprietary applications with the
+library.  If this is what you want to do, use the GNU Library General
+Public License instead of this License.

Added: trunk/bigboard/libgmail/CVS/Entries
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/CVS/Entries	Mon May 12 18:42:13 2008
@@ -0,0 +1,14 @@
+/ANNOUNCE/1.8/Sun Aug 22 12:25:37 2004//
+/CHANGELOG/1.32/Wed Apr 30 21:50:20 2008//
+/COPYING/1.1/Wed Aug 10 05:47:44 2005//
+/MANIFEST.in/1.1/Wed Apr 30 21:49:14 2008//
+/README/1.10/Mon Jul 16 03:59:11 2007//
+/gmail_transport.py/1.1/Sun Oct  7 13:59:15 2007//
+/lgconstants.py/1.2/Sat Jan  7 11:04:49 2006//
+/lgcontacts.py/1.1/Sat Jan  7 11:04:49 2006//
+/libgmail.py/1.102/Wed May  7 22:50:10 2008//
+/mkconstants.py/1.11/Tue Aug 16 06:43:47 2005//
+/setup.py/1.9/Wed Apr 30 21:49:14 2008//
+/test_contacts.py/1.1/Sat Jan  7 11:04:49 2006//
+/testlibgmail.py/1.9/Mon Jul 16 03:59:11 2007//
+D

Added: trunk/bigboard/libgmail/CVS/Entries.Log
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/CVS/Entries.Log	Mon May 12 18:42:13 2008
@@ -0,0 +1,2 @@
+A D/ClientCookie////
+A D/demos////

Added: trunk/bigboard/libgmail/CVS/Repository
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/CVS/Repository	Mon May 12 18:42:13 2008
@@ -0,0 +1 @@
+libgmail

Added: trunk/bigboard/libgmail/CVS/Root
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/CVS/Root	Mon May 12 18:42:13 2008
@@ -0,0 +1 @@
+:pserver:anonymous@libgmail.cvs.sourceforge.net:/cvsroot/libgmail

Added: trunk/bigboard/libgmail/ClientCookie/CVS/Entries
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/ClientCookie/CVS/Entries	Mon May 12 18:42:13 2008
@@ -0,0 +1,15 @@
+/_BSDDBCookieJar.py/1.1/Tue Apr  8 22:33:18 2008//
+/_ClientCookie.py/1.1/Tue Apr  8 22:33:18 2008//
+/_ConnCache.py/1.1/Tue Apr  8 22:33:18 2008//
+/_Debug.py/1.1/Tue Apr  8 22:33:18 2008//
+/_HeadersUtil.py/1.1/Tue Apr  8 22:33:18 2008//
+/_LWPCookieJar.py/1.1/Tue Apr  8 22:33:18 2008//
+/_MSIECookieJar.py/1.1/Tue Apr  8 22:33:18 2008//
+/_MSIEDBCookieJar.py/1.1/Tue Apr  8 22:33:18 2008//
+/_MozillaCookieJar.py/1.1/Tue Apr  8 22:33:18 2008//
+/_Opener.py/1.1/Tue Apr  8 22:33:18 2008//
+/_Request.py/1.1/Tue Apr  8 22:33:18 2008//
+/_Util.py/1.1/Tue Apr  8 22:33:18 2008//
+/__init__.py/1.1/Tue Apr  8 22:33:18 2008//
+/_urllib2_support.py/1.1/Tue Apr  8 22:33:18 2008//
+D

Added: trunk/bigboard/libgmail/ClientCookie/CVS/Repository
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/ClientCookie/CVS/Repository	Mon May 12 18:42:13 2008
@@ -0,0 +1 @@
+libgmail/ClientCookie

Added: trunk/bigboard/libgmail/ClientCookie/CVS/Root
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/ClientCookie/CVS/Root	Mon May 12 18:42:13 2008
@@ -0,0 +1 @@
+:pserver:anonymous@libgmail.cvs.sourceforge.net:/cvsroot/libgmail

Added: trunk/bigboard/libgmail/ClientCookie/_BSDDBCookieJar.py
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/ClientCookie/_BSDDBCookieJar.py	Mon May 12 18:42:13 2008
@@ -0,0 +1,180 @@
+"""Persistent CookieJar based on bsddb standard library module.
+
+Copyright 2003-2006 John J Lee <jjl@pobox.com>
+
+This code is free software; you can redistribute it and/or modify it
+under the terms of the BSD or ZPL 2.1 licenses (see the file
+COPYING.txt included with the distribution).
+
+**********************************************************************
+THIS IS NOT FULLY TESTED!
+**********************************************************************
+
+"""
+
+from _ClientCookie import CookieJar, MappingIterator
+from _Debug import getLogger
+debug = getLogger("ClientCookie").debug
+
+import bsddb
+import cPickle
+pickle = cPickle
+del cPickle
+
+try: StopIteration
+except NameError:
+    from _ClientCookie import StopIteration
+
+def CreateBSDDBCookieJar(filename, policy=None):
+    """Return a BSDDBCookieJar given a BSDDB filename.
+
+    Use this rather than directly using the BSDDBCookieJar constructor,
+    unless you know what you're doing.
+
+    filename: filename for sleepycat BSDDB database; if the file doesn't exist,
+     it will be created; otherwise, it will be opened
+
+    **********************************************************************
+    BSDDBCookieJar IS NOT FULLY TESTED!
+    **********************************************************************
+
+    """
+    db = bsddb.db.DB()
+    db.open(filename, bsddb.db.DB_HASH, bsddb.db.DB_CREATE, 0666)
+    return BSDDBCookieJar(policy, db)
+
+class BSDDBIterator:
+    # XXXX should this use thread lock?
+    def __init__(self, cursor):
+        iterator = None
+        self._c = cursor
+        self._i = iterator
+    def __iter__(self): return self
+    def close(self):
+        if self._c is not None:
+            self._c.close()
+        self._c = self._i = self.next = self.__iter__ = None
+    def next(self):
+        while 1:
+            if self._i is None:
+                item = self._c.next()
+                if item is None:
+                    self.close()
+                    raise StopIteration()
+                domain, data = item
+                self._i = MappingIterator(pickle.loads(data))
+            try:
+                return self._i.next()
+            except StopIteration:
+                self._i = None
+                continue
+    def __del__(self):
+        # XXXX will this work?
+        self.close()
+
+class BSDDBCookieJar(CookieJar):
+    """CookieJar based on a BSDDB database, using the standard bsddb module.
+
+    You should use CreateBSDDBCookieJar instead of the constructor, unless you
+    know what you're doing.
+
+    Note that session cookies ARE stored in the database (marked as session
+    cookies), and will be written to disk if the database is file-based.  In
+    order to clear session cookies at the end of a session, you must call
+    .clear_session_cookies().
+
+    Call the .close() method after you've finished using an instance of this
+    class.
+
+    **********************************************************************
+    THIS IS NOT FULLY TESTED!
+    **********************************************************************
+
+    """
+    # XXX
+    # use transactions to make multiple reader processes possible
+    def __init__(self, policy=None, db=None):
+        CookieJar.__init__(self, policy)
+        del self._cookies
+        if db is None:
+            db = bsddb.db.DB()
+        self._db = db
+    def close(self):
+        self._db.close()
+    def __del__(self):
+        # XXXX will this work?
+        self.close()
+    def clear(self, domain=None, path=None, name=None):
+        if name is not None:
+            if (domain is None) or (path is None):
+                raise ValueError(
+                    "domain and path must be given to remove a cookie by name")
+        elif path is not None:
+            if domain is None:
+                raise ValueError(
+                    "domain must be given to remove cookies by path")
+
+        db = self._db
+        self._cookies_lock.acquire()
+        try:
+            if domain is not None:
+                data = db.get(domain)
+                if data is not None:
+                    if path is name is None:
+                        db.delete(domain)
+                    else:
+                        c2 = pickle.loads(data)
+                        if name is None:
+                            del c2[path]
+                        else:
+                            del c2[path][name]
+                        # write the modified mapping back: the unpickled
+                        # dict is a copy, so deletions must be persisted
+                        db.put(domain, pickle.dumps(c2))
+                else:
+                    raise KeyError("no domain '%s'" % domain)
+        finally:
+            self._cookies_lock.release()
+    def set_cookie(self, cookie):
+        db = self._db
+        self._cookies_lock.acquire()
+        try:
+            # store 2-level dict under domain, like {path: {name: value}}
+            data = db.get(cookie.domain)
+            if data is None:
+                c2 = {}
+            else:
+                c2 = pickle.loads(data)
+            if not c2.has_key(cookie.path): c2[cookie.path] = {}
+            c3 = c2[cookie.path]
+            c3[cookie.name] = cookie
+            db.put(cookie.domain, pickle.dumps(c2))
+        finally:
+            self._cookies_lock.release()
+    def __iter__(self):
+        return BSDDBIterator(self._db.cursor())
+    def _cookies_for_request(self, request):
+        """Return a list of cookies to be returned to server."""
+        cookies = []
+        for domain in self._db.keys():
+            cookies.extend(self._cookies_for_domain(domain, request))
+        return cookies
+    def _cookies_for_domain(self, domain, request):
+        debug("Checking %s for cookies to return", domain)
+        if not self._policy.domain_return_ok(domain, request):
+            return []
+
+        data = self._db.get(domain)
+        if data is None:
+            return []
+        cookies_by_path = pickle.loads(data)
+
+        cookies = []
+        for path in cookies_by_path.keys():
+            if not self._policy.path_return_ok(path, request):
+                continue
+            for name, cookie in cookies_by_path[path].items():
+                if not self._policy.return_ok(cookie, request):
+                    debug("   not returning cookie")
+                    continue
+                debug("   it's a match")
+                cookies.append(cookie)
+
+        return cookies

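As an aside on the file above: the storage layout that `set_cookie` uses (each database value is a pickled two-level dict, `{path: {name: cookie}}`, keyed by domain) can be sketched without the long-deprecated `bsddb` module. In this sketch a plain dict stands in for the database handle and bare strings stand in for `Cookie` objects; both are assumptions for illustration only.

```python
import pickle

# A plain dict stands in for the bsddb handle (assumption: the real
# class stores Cookie objects and guards access with a lock).
db = {}

def store(db, domain, path, name, value):
    # Mirror BSDDBCookieJar.set_cookie: load the per-domain mapping,
    # update the {path: {name: value}} structure, pickle it back.
    data = db.get(domain)
    c2 = pickle.loads(data) if data is not None else {}
    c2.setdefault(path, {})[name] = value
    db[domain] = pickle.dumps(c2)

store(db, ".example.com", "/", "sess", "abc123")
store(db, ".example.com", "/app", "pref", "dark")

loaded = pickle.loads(db[".example.com"])
print(loaded["/"]["sess"])   # abc123
print(sorted(loaded))        # ['/', '/app']
```

Because every value is an independent pickle, clearing or rewriting one domain never touches another domain's record, which is why the class can get away with a single coarse lock around each operation.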
Added: trunk/bigboard/libgmail/ClientCookie/_ClientCookie.py
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/ClientCookie/_ClientCookie.py	Mon May 12 18:42:13 2008
@@ -0,0 +1,1687 @@
+"""HTTP cookie handling for web clients, plus some other stuff.
+
+This module originally developed from my port of Gisle Aas' Perl module
+HTTP::Cookies, from the libwww-perl library.
+
+Docstrings, comments and debug strings in this code refer to the
+attributes of the HTTP cookie system as cookie-attributes, to distinguish
+them clearly from Python attributes.
+
+                        CookieJar____
+                        /     \      \
+            FileCookieJar      \      \
+             /    |   \         \      \
+ MozillaCookieJar | LWPCookieJar \      \
+                  |               |      \
+                  |   ---MSIEBase |       \
+                  |  /      |     |        \
+                  | /   MSIEDBCookieJar BSDDBCookieJar
+                  |/    
+               MSIECookieJar
+
+Comments to John J Lee <jjl@pobox.com>.
+
+
+Copyright 2002-2006 John J Lee <jjl@pobox.com>
+Copyright 1997-1999 Gisle Aas (original libwww-perl code)
+Copyright 2002-2003 Johnny Lee (original MSIE Perl code)
+
+This code is free software; you can redistribute it and/or modify it
+under the terms of the BSD or ZPL 2.1 licenses (see the file
+COPYING.txt included with the distribution).
+
+"""
+
+VERSION = "1.3.0"
+
+
+# Public health warning: anyone who thought 'cookies are simple, aren't they?',
+# run away now :-(
+
+import sys, re, urlparse, string, copy, time, struct, urllib, types
+try:
+    import threading
+    _threading = threading; del threading
+except ImportError:
+    import dummy_threading
+    _threading = dummy_threading; del dummy_threading
+import httplib  # only for the default HTTP port
+
+MISSING_FILENAME_TEXT = ("a filename was not supplied (nor was the CookieJar "
+                         "instance initialised with one)")
+DEFAULT_HTTP_PORT = str(httplib.HTTP_PORT)
+
+try: True
+except NameError:
+    True = 1
+    False = 0
+
+try:
+    from types import UnicodeType
+except ImportError:
+    UNICODE = False
+else:
+    UNICODE = True
+
+try: StopIteration
+except NameError:
+    class StopIteration(Exception): pass
+
+import ClientCookie
+from _HeadersUtil import split_header_words, parse_ns_headers
+from _Util import startswith, endswith, isstringlike, getheaders
+from _Debug import warn, getLogger
+debug = getLogger("ClientCookie.cookies").debug
+
+try: bool
+except NameError:
+    def bool(expr):
+        if expr: return True
+        else: return False
+
+try: issubclass(Exception, (Exception,))
+except TypeError:
+    real_issubclass = issubclass
+    from _Util import compat_issubclass
+    issubclass = compat_issubclass
+    del compat_issubclass
+
+def reraise_unmasked_exceptions(unmasked=()):
+    # There are a few catch-all except: statements in this module, for
+    # catching input that's bad in unexpected ways.
+    # This function re-raises some exceptions we don't want to trap.
+    if not ClientCookie.USE_BARE_EXCEPT:
+        raise
+    unmasked = unmasked + (KeyboardInterrupt, SystemExit, MemoryError)
+    etype = sys.exc_info()[0]
+    if issubclass(etype, unmasked):
+        raise
+    # swallowed an exception
+    import traceback, StringIO
+    f = StringIO.StringIO()
+    traceback.print_exc(None, f)
+    msg = f.getvalue()
+    warn("ClientCookie bug!\n%s" % msg)
+
+
+IPV4_RE = re.compile(r"\.\d+$")
+def is_HDN(text):
+    """Return True if text is a host domain name."""
+    # XXX
+    # This may well be wrong.  Which RFC is HDN defined in, if any (for
+    #  the purposes of RFC 2965)?
+    # For the current implementation, what about IPv6?  Remember to look
+    #  at other uses of IPV4_RE also, if change this.
+    return not (IPV4_RE.search(text) or
+                text == "" or
+                text[0] == "." or text[-1] == ".")
+
+def domain_match(A, B):
+    """Return True if domain A domain-matches domain B, according to RFC 2965.
+
+    A and B may be host domain names or IP addresses.
+
+    RFC 2965, section 1:
+
+    Host names can be specified either as an IP address or a HDN string.
+    Sometimes we compare one host name with another.  (Such comparisons SHALL
+    be case-insensitive.)  Host A's name domain-matches host B's if
+
+         *  their host name strings string-compare equal; or
+
+         * A is a HDN string and has the form NB, where N is a non-empty
+            name string, B has the form .B', and B' is a HDN string.  (So,
+            x.y.com domain-matches .Y.com but not Y.com.)
+
+    Note that domain-match is not a commutative operation: a.b.c.com
+    domain-matches .c.com, but not the reverse.
+
+    """
+    # Note that, if A or B are IP addresses, the only relevant part of the
+    # definition of the domain-match algorithm is the direct string-compare.
+    A = string.lower(A)
+    B = string.lower(B)
+    if A == B:
+        return True
+    if not is_HDN(A):
+        return False
+    i = string.rfind(A, B)
+    has_form_nb = not (i == -1 or i == 0)
+    return (
+        has_form_nb and
+        startswith(B, ".") and
+        is_HDN(B[1:])
+        )
+
+def liberal_is_HDN(text):
+    """Return True if text is sort of like a host domain name.
+
+    For accepting/blocking domains.
+
+    """
+    return not IPV4_RE.search(text)
+
+def user_domain_match(A, B):
+    """For blocking/accepting domains.
+
+    A and B may be host domain names or IP addresses.
+
+    """
+    A = string.lower(A)
+    B = string.lower(B)
+    if not (liberal_is_HDN(A) and liberal_is_HDN(B)):
+        if A == B:
+            # equal IP addresses
+            return True
+        return False
+    initial_dot = startswith(B, ".")
+    if initial_dot and endswith(A, B):
+        return True
+    if not initial_dot and A == B:
+        return True
+    return False
+
+cut_port_re = re.compile(r":\d+$")
+def request_host(request):
+    """Return request-host, as defined by RFC 2965.
+
+    Variation from RFC: returned value is lowercased, for convenient
+    comparison.
+
+    """
+    url = request.get_full_url()
+    host = urlparse.urlparse(url)[1]
+    if host == "":
+        host = request.get_header("Host", "")
+
+    # remove port, if present
+    host = cut_port_re.sub("", host, 1)
+    return string.lower(host)
+
+def eff_request_host(request):
+    """Return a tuple (request-host, effective request-host name).
+
+    As defined by RFC 2965, except both are lowercased.
+
+    """
+    erhn = req_host = request_host(request)
+    if string.find(req_host, ".") == -1 and not IPV4_RE.search(req_host):
+        erhn = req_host + ".local"
+    return req_host, erhn
+
+def request_path(request):
+    """request-URI, as defined by RFC 2965."""
+    url = request.get_full_url()
+    #scheme, netloc, path, parameters, query, frag = urlparse.urlparse(url)
+    #req_path = escape_path(string.join(urlparse.urlparse(url)[2:], ""))
+    path, parameters, query, frag = urlparse.urlparse(url)[2:]
+    if parameters:
+        path = "%s;%s" % (path, parameters)
+    path = escape_path(path)
+    req_path = urlparse.urlunparse(("", "", path, "", query, frag))
+    if not startswith(req_path, "/"):
+        # fix bad RFC 2396 absoluteURI
+        req_path = "/"+req_path
+    return req_path
+
+def request_port(request):
+    host = request.get_host()
+    i = string.find(host, ':')
+    if i >= 0:
+        port = host[i+1:]
+        try:
+            int(port)
+        except ValueError:
+            debug("nonnumeric port: '%s'", port)
+            return None
+    else:
+        port = DEFAULT_HTTP_PORT
+    return port
+
+# Characters in addition to A-Z, a-z, 0-9, '_', '.', and '-' that don't
+# need to be escaped to form a valid HTTP URL (RFCs 2396 and 1738).
+HTTP_PATH_SAFE = "%/;:@&=+$,!~*'()"
+ESCAPED_CHAR_RE = re.compile(r"%([0-9a-fA-F][0-9a-fA-F])")
+def uppercase_escaped_char(match):
+    return "%%%s" % string.upper(match.group(1))
+def escape_path(path):
+    """Escape any invalid characters in HTTP URL, and uppercase all escapes."""
+    # There's no knowing what character encoding was used to create URLs
+    # containing %-escapes, but since we have to pick one to escape invalid
+    # path characters, we pick UTF-8, as recommended in the HTML 4.0
+    # specification:
+    # http://www.w3.org/TR/REC-html40/appendix/notes.html#h-B.2.1
+    # And here, kind of: draft-fielding-uri-rfc2396bis-03
+    # (And in draft IRI specification: draft-duerst-iri-05)
+    # (And here, for new URI schemes: RFC 2718)
+    if UNICODE and isinstance(path, types.UnicodeType):
+        path = path.encode("utf-8")
+    path = urllib.quote(path, HTTP_PATH_SAFE)
+    path = ESCAPED_CHAR_RE.sub(uppercase_escaped_char, path)
+    return path
+
+def reach(h):
+    """Return reach of host h, as defined by RFC 2965, section 1.
+
+    The reach R of a host name H is defined as follows:
+
+       *  If
+
+          -  H is the host domain name of a host; and,
+
+          -  H has the form A.B; and
+
+          -  A has no embedded (that is, interior) dots; and
+
+          -  B has at least one embedded dot, or B is the string "local".
+             then the reach of H is .B.
+
+       *  Otherwise, the reach of H is H.
+
+    >>> reach("www.acme.com")
+    '.acme.com'
+    >>> reach("acme.com")
+    'acme.com'
+    >>> reach("acme.local")
+    '.local'
+
+    """
+    i = string.find(h, ".")
+    if i >= 0:
+        #a = h[:i]  # this line is only here to show what a is
+        b = h[i+1:]
+        i = string.find(b, ".")
+        if is_HDN(h) and (i >= 0 or b == "local"):
+            return "."+b
+    return h
+
+def is_third_party(request):
+    """
+
+    RFC 2965, section 3.3.6:
+
+        An unverifiable transaction is to a third-party host if its request-
+        host U does not domain-match the reach R of the request-host O in the
+        origin transaction.
+
+    """
+    req_host = request_host(request)
+    # the origin request's request-host was stuffed into request by
+    # _urllib2_support.AbstractHTTPHandler
+    return not domain_match(req_host, reach(request.origin_req_host))
+
+
+class Cookie:
+    """HTTP Cookie.
+
+    This class represents both Netscape and RFC 2965 cookies.
+
+    This is deliberately a very simple class.  It just holds attributes.  It's
+    possible to construct Cookie instances that don't comply with the cookie
+    standards.  CookieJar.make_cookies is the factory function for Cookie
+    objects -- it deals with cookie parsing, supplying defaults, and
+    normalising to the representation used in this class.  CookiePolicy is
+    responsible for checking them to see whether they should be accepted from
+    and returned to the server.
+
+    version: integer;
+    name: string;
+    value: string (may be None);
+    port: string; None indicates no attribute was supplied (eg. "Port", rather
+     than eg. "Port=80"); otherwise, a port string (eg. "80") or a port list
+     string (eg. "80,8080")
+    port_specified: boolean; true if a value was supplied with the Port
+     cookie-attribute
+    domain: string;
+    domain_specified: boolean; true if Domain was explicitly set
+    domain_initial_dot: boolean; true if Domain as set in HTTP header by server
+     started with a dot (yes, this really is necessary!)
+    path: string;
+    path_specified: boolean; true if Path was explicitly set
+    secure:  boolean; true if should only be returned over secure connection
+    expires: integer; seconds since epoch (RFC 2965 cookies should calculate
+     this value from the Max-Age attribute)
+    discard: boolean, true if this is a session cookie; (if no expires value,
+     this should be true)
+    comment: string;
+    comment_url: string;
+    rfc2109: boolean; true if cookie arrived in a Set-Cookie: (not
+     Set-Cookie2:) header, but had a version cookie-attribute of 1
+    rest: mapping of other cookie-attributes
+
+    Note that the port may be present in the headers, but unspecified ("Port"
+    rather than "Port=80", for example); if this is the case, port is None.
+
+    """
+
+    def __init__(self, version, name, value,
+                 port, port_specified,
+                 domain, domain_specified, domain_initial_dot,
+                 path, path_specified,
+                 secure,
+                 expires,
+                 discard,
+                 comment,
+                 comment_url,
+                 rest,
+                 rfc2109=False,
+                 ):
+
+        if version is not None: version = int(version)
+        if expires is not None: expires = int(expires)
+        if port is None and port_specified is True:
+            raise ValueError("if port is None, port_specified must be false")
+
+        self.version = version
+        self.name = name
+        self.value = value
+        self.port = port
+        self.port_specified = port_specified
+        # normalise case, as per RFC 2965 section 3.3.3
+        self.domain = string.lower(domain)
+        self.domain_specified = domain_specified
+        # Sigh.  We need to know whether the domain given in the
+        # cookie-attribute had an initial dot, in order to follow RFC 2965
+        # (as clarified in draft errata).  Needed for the returned $Domain
+        # value.
+        self.domain_initial_dot = domain_initial_dot
+        self.path = path
+        self.path_specified = path_specified
+        self.secure = secure
+        self.expires = expires
+        self.discard = discard
+        self.comment = comment
+        self.comment_url = comment_url
+        self.rfc2109 = rfc2109
+
+        self._rest = copy.copy(rest)
+
+    def has_nonstandard_attr(self, name):
+        return self._rest.has_key(name)
+    def get_nonstandard_attr(self, name, default=None):
+        return self._rest.get(name, default)
+    def set_nonstandard_attr(self, name, value):
+        self._rest[name] = value
+    def nonstandard_attr_keys(self):
+        return self._rest.keys()
+
+    def is_expired(self, now=None):
+        if now is None: now = time.time()
+        return (self.expires is not None) and (self.expires <= now)
+
+    def __str__(self):
+        if self.port is None: p = ""
+        else: p = ":"+self.port
+        limit = self.domain + p + self.path
+        if self.value is not None:
+            namevalue = "%s=%s" % (self.name, self.value)
+        else:
+            namevalue = self.name
+        return "<Cookie %s for %s>" % (namevalue, limit)
+
+    def __repr__(self):
+        args = []
+        for name in ["version", "name", "value",
+                     "port", "port_specified",
+                     "domain", "domain_specified", "domain_initial_dot",
+                     "path", "path_specified",
+                     "secure", "expires", "discard", "comment", "comment_url",
+                     ]:
+            attr = getattr(self, name)
+            args.append("%s=%s" % (name, repr(attr)))
+        args.append("rest=%s" % repr(self._rest))
+        args.append("rfc2109=%s" % repr(self.rfc2109))
+        return "Cookie(%s)" % string.join(args, ", ")
+
+
+class CookiePolicy:
+    """Defines which cookies get accepted from and returned to server.
+
+    May also modify cookies.
+
+    The subclass DefaultCookiePolicy defines the standard rules for Netscape
+    and RFC 2965 cookies -- override that if you want a customised policy.
+
+    As well as implementing set_ok and return_ok, implementations of this
+    interface must also supply the following attributes, indicating which
+    protocols should be used, and how.  These can be read and set at any time,
+    though whether that makes complete sense from the protocol point of view is
+    doubtful.
+
+    Public attributes:
+
+    netscape: implement netscape protocol
+    rfc2965: implement RFC 2965 protocol
+    rfc2109_as_netscape:
+       WARNING: This argument will change or go away if it is not accepted
+                the Python standard library in this form!
+     If true, treat RFC 2109 cookies as though they were Netscape cookies.  The
+     default is for this attribute to be None, which means treat 2109 cookies
+     as RFC 2965 cookies unless RFC 2965 handling is switched off (which it is,
+     by default), and as Netscape cookies otherwise.
+    hide_cookie2: don't add Cookie2 header to requests (the presence of
+     this header indicates to the server that we understand RFC 2965
+     cookies)
+
+    """
+    def set_ok(self, cookie, request):
+        """Return true if (and only if) cookie should be accepted from server.
+
+        Currently, pre-expired cookies never get this far -- the CookieJar
+        class deletes such cookies itself.
+
+        cookie: ClientCookie.Cookie object
+        request: object implementing the interface defined by
+         CookieJar.extract_cookies.__doc__
+
+        """
+        raise NotImplementedError()
+
+    def return_ok(self, cookie, request):
+        """Return true if (and only if) cookie should be returned to server.
+
+        cookie: ClientCookie.Cookie object
+        request: object implementing the interface defined by
+         CookieJar.add_cookie_header.__doc__
+
+        """
+        raise NotImplementedError()
+
+    def domain_return_ok(self, domain, request):
+        """Return false if cookies should not be returned, given cookie domain.
+
+        This is here as an optimization, to remove the need for checking every
+        cookie with a particular domain (which may involve reading many files).
+        The default implementations of domain_return_ok and path_return_ok
+        (return True) leave all the work to return_ok.
+
+        If domain_return_ok returns true for the cookie domain, path_return_ok
+        is called for the cookie path.  Otherwise, path_return_ok and return_ok
+        are never called for that cookie domain.  If path_return_ok returns
+        true, return_ok is called with the Cookie object itself for a full
+        check.  Otherwise, return_ok is never called for that cookie path.
+
+        Note that domain_return_ok is called for every *cookie* domain, not
+        just for the *request* domain.  For example, the function might be
+        called with both ".acme.com" and "www.acme.com" if the request domain is
+        "www.acme.com".  The same goes for path_return_ok.
+
+        For argument documentation, see the docstring for return_ok.
+
+        """
+        return True
+
+    def path_return_ok(self, path, request):
+        """Return false if cookies should not be returned, given cookie path.
+
+        See the docstring for domain_return_ok.
+
+        """
+        return True
+
+
+class DefaultCookiePolicy(CookiePolicy):
+    """Implements the standard rules for accepting and returning cookies.
+
+    Both RFC 2965 and Netscape cookies are covered.  RFC 2965 handling is
+    switched off by default.
+
+    The easiest way to provide your own policy is to override this class and
+    call its methods in your overridden implementations before adding your own
+    additional checks.
+
+    import ClientCookie
+    class MyCookiePolicy(ClientCookie.DefaultCookiePolicy):
+        def set_ok(self, cookie, request):
+            if not ClientCookie.DefaultCookiePolicy.set_ok(
+                self, cookie, request):
+                return False
+            if i_dont_want_to_store_this_cookie():
+                return False
+            return True
+
+    In addition to the features required to implement the CookiePolicy
+    interface, this class allows you to block and allow domains from setting
+    and receiving cookies.  There are also some strictness switches that allow
+    you to tighten up the rather loose Netscape protocol rules a little bit (at
+    the cost of blocking some benign cookies).
+
+    A domain blacklist and whitelist are provided (both off by default).  Only
+    domains not in the blacklist and present in the whitelist (if the whitelist
+    is active) participate in cookie setting and returning.  Use the
+    blocked_domains constructor argument, and blocked_domains and
+    set_blocked_domains methods (and the corresponding argument and methods for
+    allowed_domains).  If you set a whitelist, you can turn it off again by
+    setting it to None.
+
+    Domains in block or allow lists that do not start with a dot must
+    string-compare equal.  For example, "acme.com" matches a blacklist entry of
+    "acme.com", but "www.acme.com" does not.  Domains that do start with a dot
+    are matched by more specific domains too.  For example, both "www.acme.com"
+    and "www.munitions.acme.com" match ".acme.com" (but "acme.com" itself does
+    not).  IP addresses are an exception, and must match exactly.  For example,
+    if blocked_domains contains "192.168.1.2" and ".168.1.2", 192.168.1.2 is
+    blocked, but 193.168.1.2 is not.
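+
+    For example (an illustrative sketch; the hostnames are invented):
+
+    policy = DefaultCookiePolicy(blocked_domains=[".ads.example.com"])
+    policy.is_blocked("banner.ads.example.com")  # --> True
+    policy.is_blocked("www.example.com")  # --> False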
+
+    Additional Public Attributes:
+
+    General strictness switches
+
+    strict_domain: don't allow sites to set two-component domains with
+     country-code top-level domains like .co.uk, .gov.uk, .co.nz, etc.
+     This is far from perfect and isn't guaranteed to work!
+
+    RFC 2965 protocol strictness switches
+
+    strict_rfc2965_unverifiable: follow RFC 2965 rules on unverifiable
+     transactions (usually, an unverifiable transaction is one resulting from
+     a redirect or an image hosted on another site); if this is false, cookies
+     are NEVER blocked on the basis of verifiability
+
+    Netscape protocol strictness switches
+
+    strict_ns_unverifiable: apply RFC 2965 rules on unverifiable transactions
+     even to Netscape cookies
+    strict_ns_domain: flags indicating how strict to be with domain-matching
+     rules for Netscape cookies:
+      DomainStrictNoDots: when setting cookies, host prefix must not contain a
+       dot (eg. www.foo.bar.com can't set a cookie for .bar.com, because
+       www.foo contains a dot)
+      DomainStrictNonDomain: cookies that did not explicitly specify a Domain
+       cookie-attribute can only be returned to a domain that string-compares
+       equal to the domain that set the cookie (eg. rockets.acme.com won't
+       be returned cookies from acme.com that had no Domain cookie-attribute)
+      DomainRFC2965Match: when setting cookies, require a full RFC 2965
+       domain-match
+      DomainLiberal and DomainStrict are the most useful combinations of the
+       above flags, for convenience
+    strict_ns_set_initial_dollar: ignore cookies in Set-Cookie: headers that
+     have names starting with '$'
+    strict_ns_set_path: don't allow setting cookies whose path doesn't
+     path-match request URI
+
+    """
+
+    DomainStrictNoDots = 1
+    DomainStrictNonDomain = 2
+    DomainRFC2965Match = 4
+
+    DomainLiberal = 0
+    DomainStrict = DomainStrictNoDots|DomainStrictNonDomain
+
+    def __init__(self,
+                 blocked_domains=None, allowed_domains=None,
+                 netscape=True, rfc2965=False,
+                 # WARNING: this argument will change or go away if it is not
+                 # accepted into the Python standard library in this form!
+                 # default, ie. treat 2109 as netscape iff not rfc2965
+                 rfc2109_as_netscape=None,
+                 hide_cookie2=False,
+                 strict_domain=False,
+                 strict_rfc2965_unverifiable=True,
+                 strict_ns_unverifiable=False,
+                 strict_ns_domain=DomainLiberal,
+                 strict_ns_set_initial_dollar=False,
+                 strict_ns_set_path=False,
+                 ):
+        """
+        Constructor arguments should be used as keyword arguments only.
+
+        blocked_domains: sequence of domain names that we never accept cookies
+         from, nor return cookies to
+        allowed_domains: if not None, this is a sequence of the only domains
+         for which we accept and return cookies
+
+        For other arguments, see CookiePolicy.__doc__ and
+        DefaultCookiePolicy.__doc__.
+
+        """
+        self.netscape = netscape
+        self.rfc2965 = rfc2965
+        self.rfc2109_as_netscape = rfc2109_as_netscape
+        self.hide_cookie2 = hide_cookie2
+        self.strict_domain = strict_domain
+        self.strict_rfc2965_unverifiable = strict_rfc2965_unverifiable
+        self.strict_ns_unverifiable = strict_ns_unverifiable
+        self.strict_ns_domain = strict_ns_domain
+        self.strict_ns_set_initial_dollar = strict_ns_set_initial_dollar
+        self.strict_ns_set_path = strict_ns_set_path
+
+        if blocked_domains is not None:
+            self._blocked_domains = tuple(blocked_domains)
+        else:
+            self._blocked_domains = ()
+
+        if allowed_domains is not None:
+            allowed_domains = tuple(allowed_domains)
+        self._allowed_domains = allowed_domains
+
+    def blocked_domains(self):
+        """Return the sequence of blocked domains (as a tuple)."""
+        return self._blocked_domains
+    def set_blocked_domains(self, blocked_domains):
+        """Set the sequence of blocked domains."""
+        self._blocked_domains = tuple(blocked_domains)
+
+    def is_blocked(self, domain):
+        for blocked_domain in self._blocked_domains:
+            if user_domain_match(domain, blocked_domain):
+                return True
+        return False
+
+    def allowed_domains(self):
+        """Return None, or the sequence of allowed domains (as a tuple)."""
+        return self._allowed_domains
+    def set_allowed_domains(self, allowed_domains):
+        """Set the sequence of allowed domains, or None."""
+        if allowed_domains is not None:
+            allowed_domains = tuple(allowed_domains)
+        self._allowed_domains = allowed_domains
+
+    def is_not_allowed(self, domain):
+        if self._allowed_domains is None:
+            return False
+        for allowed_domain in self._allowed_domains:
+            if user_domain_match(domain, allowed_domain):
+                return False
+        return True
+
+    def set_ok(self, cookie, request):
+        """
+        If you override set_ok, be sure to call this method.  If it returns
+        false, so should your subclass (assuming your subclass wants to be more
+        strict about which cookies to accept).
+
+        """
+        debug(" - checking cookie %s", cookie)
+
+        assert cookie.name is not None
+
+        for n in "version", "verifiability", "name", "path", "domain", "port":
+            fn_name = "set_ok_"+n
+            fn = getattr(self, fn_name)
+            if not fn(cookie, request):
+                return False
+
+        return True
+
+    def set_ok_version(self, cookie, request):
+        if cookie.version is None:
+            # Version is always set to 0 by parse_ns_headers if it's a Netscape
+            # cookie, so this must be an invalid RFC 2965 cookie.
+            debug("   Set-Cookie2 without version attribute (%s)", cookie)
+            return False
+        if cookie.version > 0 and not self.rfc2965:
+            debug("   RFC 2965 cookies are switched off")
+            return False
+        elif cookie.version == 0 and not self.netscape:
+            debug("   Netscape cookies are switched off")
+            return False
+        return True
+
+    def set_ok_verifiability(self, cookie, request):
+        if request.unverifiable and is_third_party(request):
+            if cookie.version > 0 and self.strict_rfc2965_unverifiable:
+                debug("   third-party RFC 2965 cookie during "
+                             "unverifiable transaction")
+                return False
+            elif cookie.version == 0 and self.strict_ns_unverifiable:
+                debug("   third-party Netscape cookie during "
+                             "unverifiable transaction")
+                return False
+        return True
+
+    def set_ok_name(self, cookie, request):
+        # Try and stop servers setting V0 cookies designed to hack other
+        # servers that know both V0 and V1 protocols.
+        if (cookie.version == 0 and self.strict_ns_set_initial_dollar and
+            startswith(cookie.name, "$")):
+            debug("   illegal name (starts with '$'): '%s'", cookie.name)
+            return False
+        return True
+
+    def set_ok_path(self, cookie, request):
+        if cookie.path_specified:
+            req_path = request_path(request)
+            if ((cookie.version > 0 or
+                 (cookie.version == 0 and self.strict_ns_set_path)) and
+                not startswith(req_path, cookie.path)):
+                debug("   path attribute %s is not a prefix of request "
+                      "path %s", cookie.path, req_path)
+                return False
+        return True
+
+    def set_ok_countrycode_domain(self, cookie, request):
+        """Return False if explicit cookie domain is not acceptable.
+
+        Called by set_ok_domain, for convenience of overriding by
+        subclasses.
+
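+        For example (illustrative): with strict_domain switched on, a cookie
+        specifying domain=".co.uk" is rejected, since "co" plus a two-letter
+        TLD looks like a country-code second-level domain.
+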
+        """
+        if cookie.domain_specified and self.strict_domain:
+            domain = cookie.domain
+            # since domain was specified, we know that:
+            assert domain.startswith(".")
+            if string.count(domain, ".") == 2:
+                # domain like .foo.bar
+                i = string.rfind(domain, ".")
+                tld = domain[i+1:]
+                sld = domain[1:i]
+                if (string.lower(sld) in [
+                    "co", "ac",
+                    "com", "edu", "org", "net", "gov", "mil", "int"] and
+                    len(tld) == 2):
+                    # domain like .co.uk
+                    return False
+        return True
+
+    def set_ok_domain(self, cookie, request):
+        if self.is_blocked(cookie.domain):
+            debug("   domain %s is in user block-list", cookie.domain)
+            return False
+        if self.is_not_allowed(cookie.domain):
+            debug("   domain %s is not in user allow-list", cookie.domain)
+            return False
+        if not self.set_ok_countrycode_domain(cookie, request):
+            debug("   country-code second level domain %s", cookie.domain)
+            return False
+        if cookie.domain_specified:
+            req_host, erhn = eff_request_host(request)
+            domain = cookie.domain
+            if startswith(domain, "."):
+                undotted_domain = domain[1:]
+            else:
+                undotted_domain = domain
+            embedded_dots = (string.find(undotted_domain, ".") >= 0)
+            if not embedded_dots and domain != ".local":
+                debug("   non-local domain %s contains no embedded dot",
+                      domain)
+                return False
+            if cookie.version == 0:
+                if (not endswith(erhn, domain) and
+                    (not startswith(erhn, ".") and
+                     not endswith("."+erhn, domain))):
+                    debug("   effective request-host %s (even with added "
+                          "initial dot) does not end with %s",
+                          erhn, domain)
+                    return False
+            if (cookie.version > 0 or
+                (self.strict_ns_domain & self.DomainRFC2965Match)):
+                if not domain_match(erhn, domain):
+                    debug("   effective request-host %s does not domain-match "
+                          "%s", erhn, domain)
+                    return False
+            if (cookie.version > 0 or
+                (self.strict_ns_domain & self.DomainStrictNoDots)):
+                host_prefix = req_host[:-len(domain)]
+                if (string.find(host_prefix, ".") >= 0 and
+                    not IPV4_RE.search(req_host)):
+                    debug("   host prefix %s for domain %s contains a dot",
+                          host_prefix, domain)
+                    return False
+        return True
+
+    def set_ok_port(self, cookie, request):
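+        # Example (illustrative): a cookie with port="80,8080" may be set by
+        # a request to port 8080, but not by one to port 443.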
+        if cookie.port_specified:
+            req_port = request_port(request)
+            if req_port is None:
+                req_port = "80"
+            else:
+                req_port = str(req_port)
+            for p in string.split(cookie.port, ","):
+                try:
+                    int(p)
+                except ValueError:
+                    debug("   bad port %s (not numeric)", p)
+                    return False
+                if p == req_port:
+                    break
+            else:
+                debug("   request port (%s) not found in %s",
+                      req_port, cookie.port)
+                return False
+        return True
+
+    def return_ok(self, cookie, request):
+        """
+        If you override return_ok, be sure to call this method.  If it returns
+        false, so should your subclass (assuming your subclass wants to be more
+        strict about which cookies to return).
+
+        """
+        # Path has already been checked by path_return_ok, and domain blocking
+        # done by domain_return_ok.
+        debug(" - checking cookie %s", cookie)
+
+        for n in "version", "verifiability", "secure", "expires", "port", "domain":
+            fn_name = "return_ok_"+n
+            fn = getattr(self, fn_name)
+            if not fn(cookie, request):
+                return False
+        return True
+
+    def return_ok_version(self, cookie, request):
+        if cookie.version > 0 and not self.rfc2965:
+            debug("   RFC 2965 cookies are switched off")
+            return False
+        elif cookie.version == 0 and not self.netscape:
+            debug("   Netscape cookies are switched off")
+            return False
+        return True
+
+    def return_ok_verifiability(self, cookie, request):
+        if request.unverifiable and is_third_party(request):
+            if cookie.version > 0 and self.strict_rfc2965_unverifiable:
+                debug("   third-party RFC 2965 cookie during unverifiable "
+                      "transaction")
+                return False
+            elif cookie.version == 0 and self.strict_ns_unverifiable:
+                debug("   third-party Netscape cookie during unverifiable "
+                      "transaction")
+                return False
+        return True
+
+    def return_ok_secure(self, cookie, request):
+        if cookie.secure and request.get_type() != "https":
+            debug("   secure cookie with non-secure request")
+            return False
+        return True
+
+    def return_ok_expires(self, cookie, request):
+        if cookie.is_expired(self._now):
+            debug("   cookie expired")
+            return False
+        return True
+
+    def return_ok_port(self, cookie, request):
+        if cookie.port:
+            req_port = request_port(request)
+            if req_port is None:
+                req_port = "80"
+            for p in string.split(cookie.port, ","):
+                if p == req_port:
+                    break
+            else:
+                debug("   request port %s does not match cookie port %s",
+                      req_port, cookie.port)
+                return False
+        return True
+
+    def return_ok_domain(self, cookie, request):
+        req_host, erhn = eff_request_host(request)
+        domain = cookie.domain
+
+        # strict check of non-domain cookies: Mozilla does this, MSIE5 doesn't
+        if (cookie.version == 0 and
+            (self.strict_ns_domain & self.DomainStrictNonDomain) and
+            not cookie.domain_specified and domain != erhn):
+            debug("   cookie with unspecified domain does not string-compare "
+                  "equal to request domain")
+            return False
+
+        if cookie.version > 0 and not domain_match(erhn, domain):
+            debug("   effective request-host name %s does not domain-match "
+                  "RFC 2965 cookie domain %s", erhn, domain)
+            return False
+        if cookie.version == 0 and not endswith("."+erhn, domain):
+            debug("   request-host %s does not match Netscape cookie domain "
+                  "%s", req_host, domain)
+            return False
+        return True
+
+    def domain_return_ok(self, domain, request):
+        # Liberal check of domain.  This is here as an optimization to avoid
+        # having to load lots of MSIE cookie files unless necessary.
+
+        # Munge req_host and erhn to always start with a dot, so as to err on
+        # the side of letting cookies through.
+        dotted_req_host, dotted_erhn = eff_request_host(request)
+        if not startswith(dotted_req_host, "."):
+            dotted_req_host = "."+dotted_req_host
+        if not startswith(dotted_erhn, "."):
+            dotted_erhn = "."+dotted_erhn
+        if not (endswith(dotted_req_host, domain) or
+                endswith(dotted_erhn, domain)):
+            #debug("   request domain %s does not match cookie domain %s",
+            #      req_host, domain)
+            return False
+
+        if self.is_blocked(domain):
+            debug("   domain %s is in user block-list", domain)
+            return False
+        if self.is_not_allowed(domain):
+            debug("   domain %s is not in user allow-list", domain)
+            return False
+
+        return True
+
+    def path_return_ok(self, path, request):
+        debug("- checking cookie path=%s", path)
+        req_path = request_path(request)
+        if not startswith(req_path, path):
+            debug("  %s does not path-match %s", req_path, path)
+            return False
+        return True
+
+
+def vals_sorted_by_key(adict):
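+    # Example (illustrative): vals_sorted_by_key({"b": 2, "a": 1}) == [1, 2]
+    # -- the values, ordered by sorted key.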
+    keys = adict.keys()
+    keys.sort()
+    return map(adict.get, keys)
+
+class MappingIterator:
+    """Iterates over nested mapping, depth-first, in sorted order by key."""
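+    # Example (illustrative): MappingIterator({"a": {"x": 2, "y": 1}, "b": 3})
+    # yields 2, 1, 3 -- leaf values depth-first, keys sorted at each level.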
+    def __init__(self, mapping):
+        self._s = [(vals_sorted_by_key(mapping), 0, None)]  # LIFO stack
+
+    def __iter__(self): return self
+
+    def next(self):
+        # this is hairy because of lack of generators
+        while 1:
+            try:
+                vals, i, prev_item = self._s.pop()
+            except IndexError:
+                raise StopIteration()
+            if i < len(vals):
+                item = vals[i]
+                i = i + 1
+                self._s.append((vals, i, prev_item))
+                try:
+                    item.items
+                except AttributeError:
+                    # non-mapping
+                    break
+                else:
+                    # mapping
+                    self._s.append((vals_sorted_by_key(item), 0, item))
+                    continue
+        return item
+
+
+# Used as second parameter to dict.get method, to distinguish absent
+# dict key from one with a None value.
+class Absent: pass
+
+class CookieJar:
+    """Collection of HTTP cookies.
+
+    You may not need to know about this class: try ClientCookie.urlopen().
+
+    The major methods are extract_cookies and add_cookie_header; these are all
+    you are likely to need.
+
+    CookieJar supports the iterator protocol:
+
+    for cookie in cookiejar:
+        # do something with cookie
+
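+    Typical use (an illustrative sketch; response and request are the objects
+    described in extract_cookies.__doc__ and add_cookie_header.__doc__):
+
+    cookiejar = CookieJar()
+    cookiejar.extract_cookies(response, request)
+    cookiejar.add_cookie_header(request)
+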
+    Methods:
+
+    add_cookie_header(request)
+    extract_cookies(response, request)
+    make_cookies(response, request)
+    set_cookie_if_ok(cookie, request)
+    set_cookie(cookie)
+    clear_session_cookies()
+    clear_expired_cookies()
+    clear(domain=None, path=None, name=None)
+
+    Public attributes
+
+    policy: CookiePolicy object
+
+    """
+
+    non_word_re = re.compile(r"\W")
+    quote_re = re.compile(r"([\"\\])")
+    strict_domain_re = re.compile(r"\.?[^.]*")
+    domain_re = re.compile(r"[^.]*")
+    dots_re = re.compile(r"^\.+")
+
+    def __init__(self, policy=None):
+        """
+        See CookieJar.__doc__ for argument documentation.
+
+        """
+        if policy is None:
+            policy = DefaultCookiePolicy()
+        self._policy = policy
+
+        self._cookies = {}
+
+        # for __getitem__ iteration in pre-2.2 Pythons
+        self._prev_getitem_index = 0
+
+    def set_policy(self, policy):
+        self._policy = policy
+
+    def _cookies_for_domain(self, domain, request):
+        cookies = []
+        if not self._policy.domain_return_ok(domain, request):
+            return []
+        debug("Checking %s for cookies to return", domain)
+        cookies_by_path = self._cookies[domain]
+        for path in cookies_by_path.keys():
+            if not self._policy.path_return_ok(path, request):
+                continue
+            cookies_by_name = cookies_by_path[path]
+            for cookie in cookies_by_name.values():
+                if not self._policy.return_ok(cookie, request):
+                    debug("   not returning cookie")
+                    continue
+                debug("   it's a match")
+                cookies.append(cookie)
+        return cookies
+
+    def _cookies_for_request(self, request):
+        """Return a list of cookies to be returned to server."""
+        cookies = []
+        for domain in self._cookies.keys():
+            cookies.extend(self._cookies_for_domain(domain, request))
+        return cookies
+
+    def _cookie_attrs(self, cookies):
+        """Return a list of cookie-attributes to be returned to server.
+
+        like ['foo="bar"; $Path="/"', ...]
+
+        The $Version attribute is also added when appropriate (currently only
+        once per request).
+
+        """
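+        # Example (illustrative): a V1 cookie foo=bar set with Path="/acme"
+        # contributes ['$Version=1', 'foo=bar', '$Path="/acme"'].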
+        # add cookies in order of most specific (ie. longest) path first
+        def decreasing_size(a, b): return cmp(len(b.path), len(a.path))
+        cookies.sort(decreasing_size)
+
+        version_set = False
+
+        attrs = []
+        for cookie in cookies:
+            # set version of Cookie header
+            # XXX
+            # What should it be if multiple matching Set-Cookie headers have
+            #  different versions themselves?
+            # Answer: there is no answer; was supposed to be settled by
+            #  RFC 2965 errata, but that may never appear...
+            version = cookie.version
+            if not version_set:
+                version_set = True
+                if version > 0:
+                    attrs.append("$Version=%s" % version)
+
+            # quote cookie value if necessary
+            # (not for Netscape protocol, which already has any quotes
+            #  intact, due to the poorly-specified Netscape Cookie: syntax)
+            if ((cookie.value is not None) and
+                self.non_word_re.search(cookie.value) and version > 0):
+                value = self.quote_re.sub(r"\\\1", cookie.value)
+            else:
+                value = cookie.value
+
+            # add cookie-attributes to be returned in Cookie header
+            if cookie.value is None:
+                attrs.append(cookie.name)
+            else:
+                attrs.append("%s=%s" % (cookie.name, value))
+            if version > 0:
+                if cookie.path_specified:
+                    attrs.append('$Path="%s"' % cookie.path)
+                if startswith(cookie.domain, "."):
+                    domain = cookie.domain
+                    if (not cookie.domain_initial_dot and
+                        startswith(domain, ".")):
+                        domain = domain[1:]
+                    attrs.append('$Domain="%s"' % domain)
+                if cookie.port is not None:
+                    p = "$Port"
+                    if cookie.port_specified:
+                        p = p + ('="%s"' % cookie.port)
+                    attrs.append(p)
+
+        return attrs
+
+    def add_cookie_header(self, request):
+        """Add correct Cookie: header to request (urllib2.Request object).
+
+        The Cookie2 header is also added unless policy.hide_cookie2 is true.
+
+        The request object (usually a urllib2.Request instance) must support
+        the methods get_full_url, get_host, get_type, has_header, get_header,
+        header_items and add_unredirected_header, as documented by urllib2, and
+        the port attribute (the port number).  Actually,
+        RequestUpgradeProcessor will automatically upgrade your Request object
+        to one with has_header, get_header, header_items and
+        add_unredirected_header, if it lacks those methods, for compatibility
+        with pre-2.4 versions of urllib2.
+
+        """
+        debug("add_cookie_header")
+        self._policy._now = self._now = int(time.time())
+
+        req_host, erhn = eff_request_host(request)
+        strict_non_domain = (
+            self._policy.strict_ns_domain & self._policy.DomainStrictNonDomain)
+
+        cookies = self._cookies_for_request(request)
+
+        attrs = self._cookie_attrs(cookies)
+        if attrs:
+            if not request.has_header("Cookie"):
+                request.add_unredirected_header(
+                    "Cookie", string.join(attrs, "; "))
+
+        # if necessary, advertise that we know RFC 2965
+        if self._policy.rfc2965 and not self._policy.hide_cookie2:
+            for cookie in cookies:
+                if cookie.version != 1 and not request.has_header("Cookie2"):
+                    request.add_unredirected_header("Cookie2", '$Version="1"')
+                    break
+
+        self.clear_expired_cookies()
+
+    def _normalized_cookie_tuples(self, attrs_set):
+        """Return list of tuples containing normalised cookie information.
+
+        attrs_set is the list of lists of key,value pairs extracted from
+        the Set-Cookie or Set-Cookie2 headers.
+
+        Tuples are name, value, standard, rest, where name and value are the
+        cookie name and value, standard is a dictionary containing the standard
+        cookie-attributes (discard, secure, version, expires or max-age,
+        domain, path and port) and rest is a dictionary containing the rest of
+        the cookie-attributes.
+
+        """
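+        # Example (illustrative): parsed attributes for the header
+        # "Set-Cookie: foo=bar; path=/; secure" normalise to the tuple
+        # ("foo", "bar", {"path": "/", "secure": True}, {}).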
+        cookie_tuples = []
+
+        boolean_attrs = "discard", "secure"
+        value_attrs = ("version",
+                       "expires", "max-age",
+                       "domain", "path", "port",
+                       "comment", "commenturl")
+
+        for cookie_attrs in attrs_set:
+            name, value = cookie_attrs[0]
+
+            # Build dictionary of standard cookie-attributes (standard) and
+            # dictionary of other cookie-attributes (rest).
+
+            # Note: expiry time is normalised to seconds since epoch.  V0
+            # cookies should have the Expires cookie-attribute, and V1 cookies
+            # should have Max-Age, but since V1 includes RFC 2109 cookies (and
+            # since V0 cookies may be a mish-mash of Netscape and RFC 2109), we
+            # accept either (but prefer Max-Age).
+            max_age_set = False
+
+            bad_cookie = False
+
+            standard = {}
+            rest = {}
+            for k, v in cookie_attrs[1:]:
+                lc = string.lower(k)
+                # don't lose case distinction for unknown fields
+                if lc in value_attrs or lc in boolean_attrs:
+                    k = lc
+                if k in boolean_attrs and v is None:
+                    # boolean cookie-attribute is present, but has no value
+                    # (like "discard", rather than "port=80")
+                    v = True
+                if standard.has_key(k):
+                    # only first value is significant
+                    continue
+                if k == "domain":
+                    if v is None:
+                        debug("   missing value for domain attribute")
+                        bad_cookie = True
+                        break
+                    # RFC 2965 section 3.3.3
+                    v = string.lower(v)
+                if k == "expires":
+                    if max_age_set:
+                        # Prefer max-age to expires (like Mozilla)
+                        continue
+                    if v is None:
+                        debug("   missing or invalid value for expires "
+                              "attribute: treating as session cookie")
+                        continue
+                if k == "max-age":
+                    max_age_set = True
+                    try:
+                        v = int(v)
+                    except ValueError:
+                        debug("   missing or invalid (non-numeric) value for "
+                              "max-age attribute")
+                        bad_cookie = True
+                        break
+                    # convert RFC 2965 Max-Age to seconds since epoch
+                    # XXX Strictly you're supposed to follow RFC 2616
+                    #   age-calculation rules.  Remember that zero Max-Age is
+                    #   a request to discard (old and new) cookie, though.
+                    k = "expires"
+                    v = self._now + v
+                if (k in value_attrs) or (k in boolean_attrs):
+                    if (v is None and
+                        k not in ["port", "comment", "commenturl"]):
+                        debug("   missing value for %s attribute" % k)
+                        bad_cookie = True
+                        break
+                    standard[k] = v
+                else:
+                    rest[k] = v
+
+            if bad_cookie:
+                continue
+
+            cookie_tuples.append((name, value, standard, rest))
+
+        return cookie_tuples
+
+    def _cookie_from_cookie_tuple(self, tup, request):
+        # standard is dict of standard cookie-attributes, rest is dict of the
+        # rest of them
+        name, value, standard, rest = tup
+
+        domain = standard.get("domain", Absent)
+        path = standard.get("path", Absent)
+        port = standard.get("port", Absent)
+        expires = standard.get("expires", Absent)
+
+        # set the easy defaults
+        version = standard.get("version", None)
+        if version is not None: version = int(version)
+        secure = standard.get("secure", False)
+        # (discard is also set if expires is Absent)
+        discard = standard.get("discard", False)
+        comment = standard.get("comment", None)
+        comment_url = standard.get("commenturl", None)
+
+        # set default path
+        if path is not Absent and path != "":
+            path_specified = True
+            path = escape_path(path)
+        else:
+            path_specified = False
+            path = request_path(request)
+            i = string.rfind(path, "/")
+            if i != -1:
+                if version == 0:
+                    # Netscape spec parts company from reality here
+                    path = path[:i]
+                else:
+                    path = path[:i+1]
+            if len(path) == 0: path = "/"
+
+        # set default domain
+        domain_specified = domain is not Absent
+        # but first we have to remember whether it starts with a dot
+        domain_initial_dot = False
+        if domain_specified:
+            domain_initial_dot = bool(startswith(domain, "."))
+        if domain is Absent:
+            req_host, erhn = eff_request_host(request)
+            domain = erhn
+        elif not startswith(domain, "."):
+            domain = "."+domain
+
+        # set default port
+        port_specified = False
+        if port is not Absent:
+            if port is None:
+                # Port attr present, but has no value: default to request port.
+                # Cookie should then only be sent back on that port.
+                port = request_port(request)
+            else:
+                port_specified = True
+                port = re.sub(r"\s+", "", port)
+        else:
+            # No port attr present.  Cookie can be sent back on any port.
+            port = None
+
+        # set default expires and discard
+        if expires is Absent:
+            expires = None
+            discard = True
+        elif expires <= self._now:
+            # Expiry date in past is request to delete cookie.  This can't be
+            # in DefaultCookiePolicy, because can't delete cookies there.
+            try:
+                self.clear(domain, path, name)
+            except KeyError:
+                pass
+            debug("Expiring cookie, domain='%s', path='%s', name='%s'",
+                  domain, path, name)
+            return None
+
+        return Cookie(version,
+                      name, value,
+                      port, port_specified,
+                      domain, domain_specified, domain_initial_dot,
+                      path, path_specified,
+                      secure,
+                      expires,
+                      discard,
+                      comment,
+                      comment_url,
+                      rest)
+
+    def _cookies_from_attrs_set(self, attrs_set, request):
+        cookie_tuples = self._normalized_cookie_tuples(attrs_set)
+
+        cookies = []
+        for tup in cookie_tuples:
+            cookie = self._cookie_from_cookie_tuple(tup, request)
+            if cookie: cookies.append(cookie)
+        return cookies
+
+    def _process_rfc2109_cookies(self, cookies):
+        if self._policy.rfc2109_as_netscape is None:
+            rfc2109_as_netscape = not self._policy.rfc2965
+        else:
+            rfc2109_as_netscape = self._policy.rfc2109_as_netscape
+        for cookie in cookies:
+            if cookie.version == 1:
+                cookie.rfc2109 = True
+                if rfc2109_as_netscape: 
+                    # treat 2109 cookies as Netscape cookies rather than
+                    # as RFC2965 cookies
+                    cookie.version = 0
+
+    def make_cookies(self, response, request):
+        """Return sequence of Cookie objects extracted from response object.
+
+        See extract_cookies.__doc__ for the interfaces required of the
+        response and request arguments.
+
+        """
+        # get cookie-attributes for RFC 2965 and Netscape protocols
+        headers = response.info()
+        rfc2965_hdrs = getheaders(headers, "Set-Cookie2")
+        ns_hdrs = getheaders(headers, "Set-Cookie")
+
+        rfc2965 = self._policy.rfc2965
+        netscape = self._policy.netscape
+
+        if ((not rfc2965_hdrs and not ns_hdrs) or
+            (not ns_hdrs and not rfc2965) or
+            (not rfc2965_hdrs and not netscape) or
+            (not netscape and not rfc2965)):
+            return []  # no relevant cookie headers: quick exit
+
+        try:
+            cookies = self._cookies_from_attrs_set(
+                split_header_words(rfc2965_hdrs), request)
+        except:
+            reraise_unmasked_exceptions()
+            cookies = []
+
+        if ns_hdrs and netscape:
+            try:
+                # RFC 2109 and Netscape cookies
+                ns_cookies = self._cookies_from_attrs_set(
+                    parse_ns_headers(ns_hdrs), request)
+            except:
+                reraise_unmasked_exceptions()
+                ns_cookies = []
+            self._process_rfc2109_cookies(ns_cookies)
+
+            # Look for Netscape cookies (from Set-Cookie headers) that match
+            # corresponding RFC 2965 cookies (from Set-Cookie2 headers).
+            # For each match, keep the RFC 2965 cookie and ignore the Netscape
+            # cookie (RFC 2965 section 9.1).  Actually, RFC 2109 cookies are
+            # bundled in with the Netscape cookies for this purpose, which is
+            # reasonable behaviour.
+            if rfc2965:
+                lookup = {}
+                for cookie in cookies:
+                    lookup[(cookie.domain, cookie.path, cookie.name)] = None
+
+                def no_matching_rfc2965(ns_cookie, lookup=lookup):
+                    key = ns_cookie.domain, ns_cookie.path, ns_cookie.name
+                    return not lookup.has_key(key)
+                ns_cookies = filter(no_matching_rfc2965, ns_cookies)
+
+            if ns_cookies:
+                cookies.extend(ns_cookies)
+
+        return cookies
+
+    def set_cookie_if_ok(self, cookie, request):
+        """Set a cookie if policy says it's OK to do so.
+
+        cookie: ClientCookie.Cookie instance
+        request: see extract_cookies.__doc__ for the required interface
+
+        """
+        self._policy._now = self._now = int(time.time())
+
+        if self._policy.set_ok(cookie, request):
+            self.set_cookie(cookie)
+
+    def set_cookie(self, cookie):
+        """Set a cookie, without checking whether or not it should be set.
+
+        cookie: ClientCookie.Cookie instance
+        """
+        c = self._cookies
+        if not c.has_key(cookie.domain): c[cookie.domain] = {}
+        c2 = c[cookie.domain]
+        if not c2.has_key(cookie.path): c2[cookie.path] = {}
+        c3 = c2[cookie.path]
+        c3[cookie.name] = cookie
+
+    def extract_cookies(self, response, request):
+        """Extract cookies from response, where allowable given the request.
+
+        Look for allowable Set-Cookie: and Set-Cookie2: headers in the response
+        object passed as argument.  Any of these headers that are found are
+        used to update the state of the object (subject to the policy.set_ok
+        method's approval).
+
+        The response object (usually the result of a call to
+        ClientCookie.urlopen, or similar) should support an info method, which
+        returns a mimetools.Message object (in fact, the 'mimetools.Message
+        object' may be any object that provides a getallmatchingheaders
+        method).
+
+        The request object (usually a urllib2.Request instance) must support
+        the methods get_full_url and get_host, as documented by urllib2, and
+        the port attribute (the port number).  The request is used to set
+        default values for cookie-attributes as well as for checking that the
+        cookie is OK to be set.
+
+        """
+        debug("extract_cookies: %s", response.info())
+        self._policy._now = self._now = int(time.time())
+
+        for cookie in self.make_cookies(response, request):
+            if self._policy.set_ok(cookie, request):
+                debug(" setting cookie: %s", cookie)
+                self.set_cookie(cookie)
+
+    def clear(self, domain=None, path=None, name=None):
+        """Clear some cookies.
+
+        Invoking this method without arguments will clear all cookies.  If
+        given a single argument, only cookies belonging to that domain will be
+        removed.  If given two arguments, cookies belonging to the specified
+        path within that domain are removed.  If given three arguments, then
+        the cookie with the specified name, path and domain is removed.
+
+        Raises KeyError if no matching cookie exists.
+
+        """
+        if name is not None:
+            if (domain is None) or (path is None):
+                raise ValueError(
+                    "domain and path must be given to remove a cookie by name")
+            del self._cookies[domain][path][name]
+        elif path is not None:
+            if domain is None:
+                raise ValueError(
+                    "domain must be given to remove cookies by path")
+            del self._cookies[domain][path]
+        elif domain is not None:
+            del self._cookies[domain]
+        else:
+            self._cookies = {}
+
+    def clear_session_cookies(self):
+        """Discard all session cookies.
+
+        Discards all cookies held by object which had either no Max-Age or
+        Expires cookie-attribute or an explicit Discard cookie-attribute, or
+        which otherwise have ended up with a true discard attribute.  For
+        interactive browsers, the end of a session usually corresponds to
+        closing the browser window.
+
+        Note that the save method won't save session cookies anyway, unless you
+        ask otherwise by passing a true ignore_discard argument.
+
+        """
+        for cookie in self:
+            if cookie.discard:
+                self.clear(cookie.domain, cookie.path, cookie.name)
+
+    def clear_expired_cookies(self):
+        """Discard all expired cookies.
+
+        You probably don't need to call this method: expired cookies are never
+        sent back to the server (provided you're using DefaultCookiePolicy),
+        this method is called by CookieJar itself every so often, and the save
+        method won't save expired cookies anyway (unless you ask otherwise by
+        passing a true ignore_expires argument).
+
+        """
+        now = time.time()
+        for cookie in self:
+            if cookie.is_expired(now):
+                self.clear(cookie.domain, cookie.path, cookie.name)
+
+    def __getitem__(self, i):
+        if i == 0:
+            self._getitem_iterator = self.__iter__()
+        elif self._prev_getitem_index != i-1: raise IndexError(
+            "CookieJar.__getitem__ only supports sequential iteration")
+        self._prev_getitem_index = i
+        try:
+            return self._getitem_iterator.next()
+        except StopIteration:
+            raise IndexError()
+
+    def __iter__(self):
+        return MappingIterator(self._cookies)
+
+    def __len__(self):
+        """Return number of contained cookies."""
+        i = 0
+        for cookie in self: i = i + 1
+        return i
+
+    def __repr__(self):
+        r = []
+        for cookie in self: r.append(repr(cookie))
+        return "<%s[%s]>" % (self.__class__, string.join(r, ", "))
+
+    def __str__(self):
+        r = []
+        for cookie in self: r.append(str(cookie))
+        return "<%s[%s]>" % (self.__class__, string.join(r, ", "))
+
+
+class LoadError(Exception): pass
+
+class FileCookieJar(CookieJar):
+    """CookieJar that can be loaded from and saved to a file.
+
+    Additional methods
+
+    save(filename=None, ignore_discard=False, ignore_expires=False)
+    load(filename=None, ignore_discard=False, ignore_expires=False)
+    revert(filename=None, ignore_discard=False, ignore_expires=False)
+
+    Additional public attributes
+
+    filename: filename for loading and saving cookies
+
+    Additional public readable attributes
+
+    delayload: request that cookies are lazily loaded from disk; this is only
+     a hint since this only affects performance, not behaviour (unless the
+     cookies on disk are changing); a CookieJar object may ignore it (in fact,
+     only MSIECookieJar lazily loads cookies at the moment)
+
+    """
+
+    def __init__(self, filename=None, delayload=False, policy=None):
+        """
+        See FileCookieJar.__doc__ for argument documentation.
+
+        Cookies are NOT loaded from the named file until either the load or
+        revert method is called.
+
+        """
+        CookieJar.__init__(self, policy)
+        if filename is not None and not isstringlike(filename):
+            raise ValueError("filename must be string-like")
+        self.filename = filename
+        self.delayload = bool(delayload)
+
+    def save(self, filename=None, ignore_discard=False, ignore_expires=False):
+        """Save cookies to a file.
+
+        filename: name of file in which to save cookies
+        ignore_discard: save even cookies set to be discarded
+        ignore_expires: save even cookies that have expired
+
+        The file is overwritten if it already exists, thus wiping all its
+        cookies.  Saved cookies can be restored later using the load or revert
+        methods.  If filename is not specified, self.filename is used; if
+        self.filename is None, ValueError is raised.
+
+        """
+        raise NotImplementedError()
+
+    def load(self, filename=None, ignore_discard=False, ignore_expires=False):
+        """Load cookies from a file.
+
+        Old cookies are kept unless overwritten by newly loaded ones.
+
+        Arguments are as for .save().
+
+        If filename is not specified, self.filename is used; if self.filename
+        is None, ValueError is raised.  The named file must be in the format
+        understood by the class, or LoadError will be raised.  This format will
+        be identical to that written by the save method, unless the load format
+        is not sufficiently well understood (as is the case for MSIECookieJar).
+
+        """
+        if filename is None:
+            if self.filename is not None: filename = self.filename
+            else: raise ValueError(MISSING_FILENAME_TEXT)
+
+        f = open(filename)
+        try:
+            self._really_load(f, filename, ignore_discard, ignore_expires)
+        finally:
+            f.close()
+
+    def revert(self, filename=None,
+               ignore_discard=False, ignore_expires=False):
+        """Clear all cookies and reload cookies from a saved file.
+
+        Raises LoadError (or IOError) if reversion is not successful; the
+        object's state will not be altered if this happens.
+
+        """
+        if filename is None:
+            if self.filename is not None: filename = self.filename
+            else: raise ValueError(MISSING_FILENAME_TEXT)
+
+        old_state = copy.deepcopy(self._cookies)
+        self._cookies = {}
+        try:
+            self.load(filename, ignore_discard, ignore_expires)
+        except (LoadError, IOError):
+            self._cookies = old_state
+            raise

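The CookieJar code above stores cookies in a three-level domain -> path -> name mapping (see set_cookie and clear). A minimal sketch of that structure and the clear() semantics, hypothetical and much simplified, not the ClientCookie API itself:

```python
class MiniJar:
    """Sketch of the domain -> path -> name nesting used by
    CookieJar.set_cookie/clear above (hypothetical, simplified)."""

    def __init__(self):
        self._cookies = {}

    def set_cookie(self, domain, path, name, value):
        # nested dicts, created on demand
        self._cookies.setdefault(domain, {}).setdefault(path, {})[name] = value

    def clear(self, domain=None, path=None, name=None):
        # mirrors CookieJar.clear: most-specific arguments first,
        # KeyError propagates if no matching cookie exists
        if name is not None:
            if domain is None or path is None:
                raise ValueError(
                    "domain and path must be given to remove a cookie by name")
            del self._cookies[domain][path][name]
        elif path is not None:
            if domain is None:
                raise ValueError(
                    "domain must be given to remove cookies by path")
            del self._cookies[domain][path]
        elif domain is not None:
            del self._cookies[domain]
        else:
            self._cookies = {}
```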
Added: trunk/bigboard/libgmail/ClientCookie/_ConnCache.py
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/ClientCookie/_ConnCache.py	Mon May 12 18:42:13 2008
@@ -0,0 +1,244 @@
+"""Generic connection cache manager.
+
+WARNING: THIS MODULE IS UNUSED AND UNTESTED!
+
+Example:
+
+ from ClientCookie import ConnectionCache
+ cache = ConnectionCache()
+ cache.deposit("http", "example.com", conn)
+ conn = cache.withdraw("http", "example.com")
+
+
+The ConnectionCache class provides cache expiration.
+
+
+Copyright (C) 2004-2006 John J Lee <jjl pobox com>.
+Copyright (C) 2001 Gisle Aas.
+
+This code is free software; you can redistribute it and/or modify it
+under the terms of the BSD or ZPL 2.1 licenses (see the file
+COPYING.txt included with the distribution).
+
+"""
+
+# Ported from libwww-perl 5.75.
+
+import time
+try:
+    from types import StringTypes
+except ImportError:
+    from types import StringType
+    StringTypes = StringType
+
+from _Util import compat_isinstance
+from _Debug import getLogger, warn
+debug = getLogger("ClientCookie").debug
+
+warn("WARNING: MODULE _ConnCache IS UNUSED AND UNTESTED!")
+
+
+class _ConnectionRecord:
+    def __init__(self, conn, scheme, key, time):
+        self.conn, self.scheme, self.key, self.time = conn, scheme, key, time
+    def __repr__(self):
+        return "%s(%s, %s, %s, %s)" % (
+            self.__class__.__name__,
+            self.conn, self.scheme, self.key, self.time)
+
+class ConnectionCache:
+    """
+    For specialized cache policy it makes sense to subclass ConnectionCache and
+    perhaps override the .deposit(), ._enforce_limits() and ._dropping()
+    methods.
+
+    """
+    def __init__(self, total_capacity=1):
+        self._conns = []
+        self._limit = {}
+        self.set_total_capacity(total_capacity)
+
+    def set_total_capacity(self, nr_connections):
+        """Set limit for number of cached connections.
+
+        Connections will start to be dropped when this limit is reached.  If 0,
+        all connections are immediately dropped.  None means no limit.
+
+        """
+        self._limit_total = nr_connections
+        self._enforce_limits()
+
+    def total_capacity(self):
+        """Return limit for number of cached connections."""
+        return self._limit_total
+
+    def set_capacity(self, scheme, nr_connections):
+        """Set limit for number of cached connections of specified scheme.
+
+        scheme: URL scheme (eg. "http" or "ftp")
+
+        """
+        self._limit[scheme] = nr_connections
+        self._enforce_limits(scheme)
+
+    def capacity(self, scheme):
+        """Return limit for number of cached connections of specified scheme.
+
+        scheme: URL scheme (eg. "http" or "ftp")
+
+        """
+        return self._limit[scheme]
+
+    def drop(self, checker=None, reason=None):
+        """Drop connections by some criteria.
+
+        checker: either a callable, a number, a string, or None:
+         If callable: called for each connection with arguments (conn, scheme,
+          key, deposit_time); if it returns a true value, the connection is
+          dropped (default is to drop all connections).
+         If a number: all connections untouched for the given number of seconds
+          or more are dropped.
+         If a string: all connections of the given scheme are dropped.
+         If None: all connections are dropped.
+        reason: passed on to the dropped() method
+
+        """
+        if not callable(checker):
+            if checker is None:
+                checker = lambda cr: True  # drop all of them
+            elif compat_isinstance(checker, StringTypes):
+                scheme = checker
+                if reason is None:
+                    reason = "drop %s" % scheme
+                checker = lambda cr, scheme=scheme: cr.scheme == scheme
+            else:  # numeric
+                age_limit = checker
+                time_limit = time.time() - age_limit
+                if reason is None:
+                    reason = "older than %s" % age_limit
+                checker = lambda cr, time_limit=time_limit: cr.time < time_limit
+        if reason is None:
+            reason = "drop"
+
+##         local $SIG{__DIE__};  # don't interfere with eval below
+##         local $@;
+        crs = []
+        for cr in self._conns:
+            if checker(cr):
+                self._dropping(cr, reason)
+            else:
+                crs.append(cr)
+        self._conns = crs
+
+    def prune(self):
+        """Drop all dead connections.
+
+        This is tested by calling the .ping() method on the connections.  If
+        the .ping() method exists and returns a false value, then the
+        connection is dropped.
+
+        """
+        # XXX HTTPConnection doesn't have a .ping() method
+        #self.drop(lambda cr: not cr.conn.ping(), "ping")
+        pass
+
+    def get_schemes(self):
+        """Return list of cached connection URL schemes."""
+        t = {}
+        for cr in self._conns:
+            t[cr.scheme] = None
+        return t.keys()
+
+    def get_connections(self, scheme=None):
+        """Return list of all connection objects with the specified URL scheme.
+
+        If no scheme is specified then all connections are returned.
+
+        """
+        cs = []
+        for cr in self._conns:
+            if scheme is None or (scheme and scheme == cr.scheme):
+                cs.append(cr.conn)
+        return cs
+
+# -------------------------------------------------------------------------
+# Methods called by handlers to try to save away connections and get them
+# back again.
+
+    def deposit(self, scheme, key, conn):
+        """Add a new connection to the cache.
+
+        scheme: URL scheme (eg. "http")
+        key: any object that can act as a dict key (usually a string or a
+         tuple)
+
+        As a side effect, other already cached connections may be dropped.
+        Multiple connections with the same scheme/key might be added.
+
+        """
+        self._conns.append(_ConnectionRecord(conn, scheme, key, time.time()))
+        self._enforce_limits(scheme)
+
+    def withdraw(self, scheme, key):
+        """Try to fetch back a connection that was previously deposited.
+
+        If no cached connection with the specified scheme/key is found, then
+        None is returned.  There is no guarantee that a deposited connection
+        can be withdrawn, as the cache manager is free to drop connections at
+        any time.
+
+        """
+        conns = self._conns
+        for i in range(len(conns)):
+            cr = conns[i]
+            if not (cr.scheme == scheme and cr.key == key):
+                continue
+            conns.pop(i)  # remove it
+            return cr.conn
+        return None
+
+# -------------------------------------------------------------------------
+# Called internally.  Subclasses might want to override these.
+
+    def _enforce_limits(self, scheme=None):
+        """Drop some cached connections, if necessary.
+
+        Called after a new connection is added (deposited) in the cache or
+        capacity limits are adjusted.
+
+        The default implementation drops connections until the specified
+        capacity limits are not exceeded.
+
+        """
+        conns = self._conns
+        if scheme:
+            schemes = [scheme]
+        else:
+            schemes = self.get_schemes()
+        for scheme in schemes:
+            limit = self._limit.get(scheme)
+            if limit is None:
+                continue
+            for i in range(len(conns) - 1, -1, -1):
+                if conns[i].scheme != scheme:
+                    continue
+                limit = limit - 1
+                if limit < 0:
+                    self._dropping(
+                        conns.pop(i),
+                        "connection cache %s capacity exceeded" % scheme)
+
+        total = self._limit_total
+        if total is not None:
+            while len(conns) > total:
+                self._dropping(conns.pop(0),
+                               "connection cache total capacity exceeded")
+
+    def _dropping(self, conn_record, reason):
+        """Called when a connection is dropped.
+
+        conn_record: _ConnectionRecord instance for the dropped connection
+        reason: string describing the reason for the drop
+
+        """
+        debug("DROPPING %s [%s]" % (conn_record, reason))

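The deposit/withdraw/capacity behaviour above can be sketched as follows (a hypothetical minimal reimplementation in modern Python, not the ConnectionCache class itself; only the total-capacity limit is modelled, per-scheme limits are omitted):

```python
import time

class MiniConnectionCache:
    """Sketch of the ConnectionCache policy above: FIFO records keyed by
    (scheme, key); the oldest record is dropped first when the total
    capacity is exceeded."""

    def __init__(self, total_capacity=1):
        self._conns = []  # list of (scheme, key, conn, deposit_time)
        self._limit_total = total_capacity

    def deposit(self, scheme, key, conn):
        self._conns.append((scheme, key, conn, time.time()))
        # enforce the total limit by dropping the oldest records first
        while (self._limit_total is not None
               and len(self._conns) > self._limit_total):
            self._conns.pop(0)

    def withdraw(self, scheme, key):
        # return (and remove) the first matching connection, else None
        for i, (s, k, conn, _) in enumerate(self._conns):
            if (s, k) == (scheme, key):
                del self._conns[i]
                return conn
        return None
```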
Added: trunk/bigboard/libgmail/ClientCookie/_Debug.py
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/ClientCookie/_Debug.py	Mon May 12 18:42:13 2008
@@ -0,0 +1,49 @@
+import sys
+
+import ClientCookie
+
+try:
+    import warnings
+except ImportError:
+    def warn(text):
+        ClientCookie.WARNINGS_STREAM.write("WARNING: "+text)
+else:
+    def warn(text):
+        warnings.warn(text, stacklevel=2)
+
+try:
+    import logging
+except:
+    NOTSET = None
+    INFO = 20
+    DEBUG = 10
+    class NullHandler:
+        def write(self, data): pass
+    class Logger:
+        def __init__(self):
+            self.level = NOTSET
+            self.handler = NullHandler()
+        def log(self, level, text, *args):
+            if args:
+                text = text % args
+            if self.level is not None and level <= self.level:
+                self.handler.write(text+"\n")
+        def debug(self, text, *args):
+            apply(self.log, (DEBUG, text)+args)
+        def info(self, text, *args):
+            apply(self.log, (INFO, text)+args)
+        def setLevel(self, lvl):
+            self.level = lvl
+        def addHandler(self, handler):
+            self.handler = handler
+    LOGGER = Logger()
+    def getLogger(name): return LOGGER
+    class StreamHandler:
+        def __init__(self, strm=None):
+            if not strm:
+                strm = sys.stderr
+            self.stream = strm
+        def write(self, data):
+            self.stream.write(data)
+else:
+    from logging import getLogger, StreamHandler, INFO, DEBUG, NOTSET

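The fallback Logger/StreamHandler classes above imitate the stdlib logging API, so callers work unchanged either way. When logging is available, equivalent usage looks like this (a sketch; the logger name and message are illustrative):

```python
import io
import logging

# Route debug messages to an in-memory stream, the same shape of setup
# the fallback classes in _Debug.py provide when logging is missing.
stream = io.StringIO()
logger = logging.getLogger("ClientCookie.demo")  # illustrative name
logger.setLevel(logging.DEBUG)
logger.addHandler(logging.StreamHandler(stream))

logger.debug("fetching cookies for %s", "example.com")
```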
Added: trunk/bigboard/libgmail/ClientCookie/_HeadersUtil.py
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/ClientCookie/_HeadersUtil.py	Mon May 12 18:42:13 2008
@@ -0,0 +1,233 @@
+"""Utility functions for HTTP header value parsing and construction.
+
+Copyright 1997-1998, Gisle Aas
+Copyright 2002-2006, John J. Lee
+
+This code is free software; you can redistribute it and/or modify it
+under the terms of the BSD or ZPL 2.1 licenses (see the file
+COPYING.txt included with the distribution).
+
+"""
+
+import os, re, string, urlparse
+from types import StringType
+try:
+    from types import UnicodeType
+    STRING_TYPES = StringType, UnicodeType
+except:
+    STRING_TYPES = StringType,
+
+from _Util import startswith, endswith, http2time
+
+try: True
+except NameError:
+    True = 1
+    False = 0
+
+def is_html(ct_headers, url, allow_xhtml=False):
+    """
+    ct_headers: Sequence of Content-Type headers
+    url: Response URL
+
+    """
+    if not ct_headers:
+        # guess
+        ext = os.path.splitext(urlparse.urlparse(url)[2])[1]
+        html_exts = [".htm", ".html"]
+        if allow_xhtml:
+            html_exts += [".xhtml"]
+        return ext in html_exts
+    # use first header
+    ct = split_header_words(ct_headers)[0][0][0]
+    html_types = ["text/html"]
+    if allow_xhtml:
+        html_types += [
+            "text/xhtml", "text/xml",
+            "application/xml", "application/xhtml+xml",
+            ]
+    return ct in html_types
+
+def unmatched(match):
+    """Return unmatched part of re.Match object."""
+    start, end = match.span(0)
+    return match.string[:start]+match.string[end:]
+
+token_re =        re.compile(r"^\s*([^=\s;,]+)")
+quoted_value_re = re.compile(r"^\s*=\s*\"([^\"\\]*(?:\\.[^\"\\]*)*)\"")
+value_re =        re.compile(r"^\s*=\s*([^\s;,]*)")
+escape_re = re.compile(r"\\(.)")
+def split_header_words(header_values):
+    r"""Parse header values into a list of lists containing key,value pairs.
+
+    The function knows how to deal with ",", ";" and "=" as well as quoted
+    values after "=".  A list of space separated tokens are parsed as if they
+    were separated by ";".
+
+    If the header_values passed as argument contains multiple values, then they
+    are treated as if they were a single value separated by comma ",".
+
+    This means that this function is useful for parsing header fields that
+    follow this syntax (BNF as from the HTTP/1.1 specification, but we relax
+    the requirement for tokens).
+
+      headers           = #header
+      header            = (token | parameter) *( [";"] (token | parameter))
+
+      token             = 1*<any CHAR except CTLs or separators>
+      separators        = "(" | ")" | "<" | ">" | "@"
+                        | "," | ";" | ":" | "\" | <">
+                        | "/" | "[" | "]" | "?" | "="
+                        | "{" | "}" | SP | HT
+
+      quoted-string     = ( <"> *(qdtext | quoted-pair ) <"> )
+      qdtext            = <any TEXT except <">>
+      quoted-pair       = "\" CHAR
+
+      parameter         = attribute "=" value
+      attribute         = token
+      value             = token | quoted-string
+
+    Each header is represented by a list of key/value pairs.  The value for a
+    simple token (not part of a parameter) is None.  Syntactically incorrect
+    headers will not necessarily be parsed as you would want.
+
+    This is easier to describe with some examples:
+
+    >>> split_header_words(['foo="bar"; port="80,81"; discard, bar=baz'])
+    [[('foo', 'bar'), ('port', '80,81'), ('discard', None)], [('bar', 'baz')]]
+    >>> split_header_words(['text/html; charset="iso-8859-1"'])
+    [[('text/html', None), ('charset', 'iso-8859-1')]]
+    >>> split_header_words([r'Basic realm="\"foo\bar\""'])
+    [[('Basic', None), ('realm', '"foobar"')]]
+
+    """
+    assert type(header_values) not in STRING_TYPES
+    result = []
+    for text in header_values:
+        orig_text = text
+        pairs = []
+        while text:
+            m = token_re.search(text)
+            if m:
+                text = unmatched(m)
+                name = m.group(1)
+                m = quoted_value_re.search(text)
+                if m:  # quoted value
+                    text = unmatched(m)
+                    value = m.group(1)
+                    value = escape_re.sub(r"\1", value)
+                else:
+                    m = value_re.search(text)
+                    if m:  # unquoted value
+                        text = unmatched(m)
+                        value = m.group(1)
+                        value = string.rstrip(value)
+                    else:
+                        # no value, a lone token
+                        value = None
+                pairs.append((name, value))
+            elif startswith(string.lstrip(text), ","):
+                # concatenated headers, as per RFC 2616 section 4.2
+                text = string.lstrip(text)[1:]
+                if pairs: result.append(pairs)
+                pairs = []
+            else:
+                # skip junk
+                non_junk, nr_junk_chars = re.subn("^[=\s;]*", "", text)
+                assert nr_junk_chars > 0, (
+                    "split_header_words bug: '%s', '%s', %s" %
+                    (orig_text, text, pairs))
+                text = non_junk
+        if pairs: result.append(pairs)
+    return result
+
+join_escape_re = re.compile(r"([\"\\])")
+def join_header_words(lists):
+    """Do the inverse of the conversion done by split_header_words.
+
+    Takes a list of lists of (key, value) pairs and produces a single header
+    value.  Attribute values are quoted if needed.
+
+    >>> join_header_words([[("text/plain", None), ("charset", "iso-8859/1")]])
+    'text/plain; charset="iso-8859/1"'
+    >>> join_header_words([[("text/plain", None)], [("charset", "iso-8859/1")]])
+    'text/plain, charset="iso-8859/1"'
+
+    """
+    headers = []
+    for pairs in lists:
+        attr = []
+        for k, v in pairs:
+            if v is not None:
+                if not re.search(r"^\w+$", v):
+                    v = join_escape_re.sub(r"\\\1", v)  # escape " and \
+                    v = '"%s"' % v
+                if k is None:  # Netscape cookies may have no name
+                    k = v
+                else:
+                    k = "%s=%s" % (k, v)
+            attr.append(k)
+        if attr: headers.append(string.join(attr, "; "))
+    return string.join(headers, ", ")
+
+def parse_ns_headers(ns_headers):
+    """Ad-hoc parser for Netscape protocol cookie-attributes.
+
+    The old Netscape cookie format for Set-Cookie can for instance contain
+    an unquoted "," in the expires field, so we have to use this ad-hoc
+    parser instead of split_header_words.
+
+    XXX This may not make the best possible effort to parse all the crap
+    that Netscape Cookie headers contain.  Ronald Tschalar's HTTPClient
+    parser is probably better, so we could do worse than following that if
+    this ever gives any trouble.
+
+    Currently, this is also used for parsing RFC 2109 cookies.
+
+    """
+    known_attrs = ("expires", "domain", "path", "secure",
+                   # RFC 2109 attrs (may turn up in Netscape cookies, too)
+                   "port", "max-age")
+
+    result = []
+    for ns_header in ns_headers:
+        pairs = []
+        version_set = False
+        params = re.split(r";\s*", ns_header)
+        for ii in range(len(params)):
+            param = params[ii]
+            param = string.rstrip(param)
+            if param == "": continue
+            if "=" not in param:
+                k, v = param, None
+            else:
+                k, v = re.split(r"\s*=\s*", param, 1)
+                k = string.lstrip(k)
+            if ii != 0:
+                lc = string.lower(k)
+                if lc in known_attrs:
+                    k = lc
+                if k == "version":
+                    # This is an RFC 2109 cookie.
+                    version_set = True
+                if k == "expires":
+                    # convert expires date to seconds since epoch
+                    if startswith(v, '"'): v = v[1:]
+                    if endswith(v, '"'): v = v[:-1]
+                    v = http2time(v)  # None if invalid
+            pairs.append((k, v))
+
+        if pairs:
+            if not version_set:
+                pairs.append(("version", "0"))
+            result.append(pairs)
+
+    return result
+
+
+def _test():
+   import doctest, _HeadersUtil
+   return doctest.testmod(_HeadersUtil)
+
+if __name__ == "__main__":
+   _test()
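[Editor's note: `parse_ns_headers` above survives, nearly unchanged, in the modern standard library as `http.cookiejar.parse_ns_headers` (ClientCookie was merged into the stdlib as `cookielib`, later `http.cookiejar`). A short sketch of what the parse produces; the header value here is made up:

```python
from http.cookiejar import parse_ns_headers

# A single Netscape-style Set-Cookie header; "secure" carries no value
pairs = parse_ns_headers(["sid=abc123; path=/; secure"])

# Each header becomes a list of (name, value) pairs; value-less tokens
# get None, and a ("version", "0") pair is appended for plain Netscape
# cookies, mirroring the version_set logic above
print(pairs[0][0])  # ('sid', 'abc123')
```

The stdlib version keeps the same shape of output, so code written against this module's `parse_ns_headers` ports over with little change.]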

Added: trunk/bigboard/libgmail/ClientCookie/_LWPCookieJar.py
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/ClientCookie/_LWPCookieJar.py	Mon May 12 18:42:13 2008
@@ -0,0 +1,188 @@
+"""Load / save to libwww-perl (LWP) format files.
+
+Actually, the format is slightly extended from that used by LWP's
+(libwww-perl's) HTTP::Cookies, to avoid losing some RFC 2965 information
+not recorded by LWP.
+
+It uses the version string "2.0", though really there isn't an LWP Cookies
+2.0 format.  This indicates that there is extra information in here
+(domain_dot and port_spec) while still being compatible with libwww-perl,
+I hope.
+
+Copyright 2002-2006 John J Lee <jjl pobox com>
+Copyright 1997-1999 Gisle Aas (original libwww-perl code)
+
+This code is free software; you can redistribute it and/or modify it
+under the terms of the BSD or ZPL 2.1 licenses (see the file
+COPYING.txt included with the distribution).
+
+"""
+
+import time, re, string
+from _ClientCookie import reraise_unmasked_exceptions, FileCookieJar, Cookie, \
+     MISSING_FILENAME_TEXT, LoadError
+from _HeadersUtil import join_header_words, split_header_words
+from _Util import startswith, iso2time, time2isoz
+from _Debug import getLogger
+debug = getLogger("ClientCookie").debug
+
+try: True
+except NameError:
+    True = 1
+    False = 0
+
+def lwp_cookie_str(cookie):
+    """Return string representation of Cookie in an the LWP cookie file format.
+
+    Actually, the format is extended a bit -- see module docstring.
+
+    """
+    h = [(cookie.name, cookie.value),
+         ("path", cookie.path),
+         ("domain", cookie.domain)]
+    if cookie.port is not None: h.append(("port", cookie.port))
+    if cookie.path_specified: h.append(("path_spec", None))
+    if cookie.port_specified: h.append(("port_spec", None))
+    if cookie.domain_initial_dot: h.append(("domain_dot", None))
+    if cookie.secure: h.append(("secure", None))
+    if cookie.expires: h.append(("expires",
+                               time2isoz(float(cookie.expires))))
+    if cookie.discard: h.append(("discard", None))
+    if cookie.comment: h.append(("comment", cookie.comment))
+    if cookie.comment_url: h.append(("commenturl", cookie.comment_url))
+    if cookie.rfc2109: h.append(("rfc2109", None))
+
+    keys = cookie.nonstandard_attr_keys()
+    keys.sort()
+    for k in keys:
+        h.append((k, str(cookie.get_nonstandard_attr(k))))
+
+    h.append(("version", str(cookie.version)))
+
+    return join_header_words([h])
+
+class LWPCookieJar(FileCookieJar):
+    """
+    The LWPCookieJar saves a sequence of "Set-Cookie3" lines.
+    "Set-Cookie3" is the format used by the libwww-perl library, not known
+    to be compatible with any browser, but which is easy to read and
+    doesn't lose information about RFC 2965 cookies.
+
+    Additional methods
+
+    as_lwp_str(ignore_discard=True, ignore_expires=True)
+
+    """
+
+    magic_re = r"^\#LWP-Cookies-(\d+\.\d+)"
+
+    def as_lwp_str(self, ignore_discard=True, ignore_expires=True):
+        """Return cookies as a string of "\n"-separated "Set-Cookie3" headers.
+
+        ignore_discard and ignore_expires: see docstring for FileCookieJar.save
+
+        """
+        now = time.time()
+        r = []
+        for cookie in self:
+            if not ignore_discard and cookie.discard:
+                debug("   Not saving %s: marked for discard", cookie.name)
+                continue
+            if not ignore_expires and cookie.is_expired(now):
+                debug("   Not saving %s: expired", cookie.name)
+                continue
+            r.append("Set-Cookie3: %s" % lwp_cookie_str(cookie))
+        return string.join(r+[""], "\n")
+
+    def save(self, filename=None, ignore_discard=False, ignore_expires=False):
+        if filename is None:
+            if self.filename is not None: filename = self.filename
+            else: raise ValueError(MISSING_FILENAME_TEXT)
+
+        f = open(filename, "w")
+        try:
+            debug("Saving LWP cookies file")
+            # There really isn't an LWP Cookies 2.0 format, but this indicates
+            # that there is extra information in here (domain_dot and
+            # port_spec) while still being compatible with libwww-perl, I hope.
+            f.write("#LWP-Cookies-2.0\n")
+            f.write(self.as_lwp_str(ignore_discard, ignore_expires))
+        finally:
+            f.close()
+
+    def _really_load(self, f, filename, ignore_discard, ignore_expires):
+        magic = f.readline()
+        if not re.search(self.magic_re, magic):
+            msg = "%s does not seem to contain cookies" % filename
+            raise LoadError(msg)
+
+        now = time.time()
+
+        header = "Set-Cookie3:"
+        boolean_attrs = ("port_spec", "path_spec", "domain_dot",
+                         "secure", "discard", "rfc2109")
+        value_attrs = ("version",
+                       "port", "path", "domain",
+                       "expires",
+                       "comment", "commenturl")
+
+        try:
+            while 1:
+                line = f.readline()
+                if line == "": break
+                if not startswith(line, header):
+                    continue
+                line = string.strip(line[len(header):])
+
+                for data in split_header_words([line]):
+                    name, value = data[0]
+                    standard = {}
+                    rest = {}
+                    for k in boolean_attrs:
+                        standard[k] = False
+                    for k, v in data[1:]:
+                        if k is not None:
+                            lc = string.lower(k)
+                        else:
+                            lc = None
+                        # don't lose case distinction for unknown fields
+                        if (lc in value_attrs) or (lc in boolean_attrs):
+                            k = lc
+                        if k in boolean_attrs:
+                            if v is None: v = True
+                            standard[k] = v
+                        elif k in value_attrs:
+                            standard[k] = v
+                        else:
+                            rest[k] = v
+
+                    h = standard.get
+                    expires = h("expires")
+                    discard = h("discard")
+                    if expires is not None:
+                        expires = iso2time(expires)
+                    if expires is None:
+                        discard = True
+                    domain = h("domain")
+                    domain_specified = startswith(domain, ".")
+                    c = Cookie(h("version"), name, value,
+                               h("port"), h("port_spec"),
+                               domain, domain_specified, h("domain_dot"),
+                               h("path"), h("path_spec"),
+                               h("secure"),
+                               expires,
+                               discard,
+                               h("comment"),
+                               h("commenturl"),
+                               rest,
+                               h("rfc2109"),
+                               ) 
+                    if not ignore_discard and c.discard:
+                        continue
+                    if not ignore_expires and c.is_expired(now):
+                        continue
+                    self.set_cookie(c)
+        except:
+            reraise_unmasked_exceptions((IOError,))
+            raise LoadError("invalid Set-Cookie3 format file %s" % filename)
+
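[Editor's note: the Set-Cookie3 serialization implemented above also lives on in the modern standard library as `http.cookiejar.LWPCookieJar`. A minimal sketch; the cookie values are made up, and `Cookie()` takes the same long positional signature that `_really_load` above reassembles:

```python
from http.cookiejar import Cookie, LWPCookieJar

jar = LWPCookieJar()
# version, name, value, port, port_specified, domain, domain_specified,
# domain_initial_dot, path, path_specified, secure, expires, discard,
# comment, comment_url, rest
cookie = Cookie(0, "session", "abc123", None, False,
                "example.com", True, False, "/", True,
                False, None, True, None, None, {})
jar.set_cookie(cookie)
print(jar.as_lwp_str())  # one "Set-Cookie3: ..." line per cookie
```

Files written this way load back with `LWPCookieJar.load()`, which is the easiest way to round-trip cookies between runs.]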

Added: trunk/bigboard/libgmail/ClientCookie/_MSIECookieJar.py
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/ClientCookie/_MSIECookieJar.py	Mon May 12 18:42:13 2008
@@ -0,0 +1,393 @@
+"""Microsoft Internet Explorer cookie loading on Windows.
+
+Copyright 2002-2003 Johnny Lee <typo_pl hotmail com> (MSIE Perl code)
+Copyright 2002-2006 John J Lee <jjl pobox com> (The Python port)
+
+This code is free software; you can redistribute it and/or modify it
+under the terms of the BSD or ZPL 2.1 licenses (see the file
+COPYING.txt included with the distribution).
+
+"""
+
+# XXX names and comments are not great here
+
+import os, re, string, time, struct
+if os.name == "nt":
+    import _winreg
+
+from _ClientCookie import FileCookieJar, CookieJar, Cookie, \
+     MISSING_FILENAME_TEXT, LoadError
+from _Util import startswith
+from _Debug import getLogger
+debug = getLogger("ClientCookie").debug
+
+try: True
+except NameError:
+    True = 1
+    False = 0
+
+
+def regload(path, leaf):
+    key = _winreg.OpenKey(_winreg.HKEY_CURRENT_USER, path, 0,
+                          _winreg.KEY_ALL_ACCESS)
+    try:
+        value = _winreg.QueryValueEx(key, leaf)[0]
+    except WindowsError:
+        value = None
+    return value
+
+WIN32_EPOCH = 0x019db1ded53e8000L  # 1970 Jan 01 00:00:00 in Win32 FILETIME
+
+def epoch_time_offset_from_win32_filetime(filetime):
+    """Convert from win32 filetime to seconds-since-epoch value.
+
+    MSIE stores create and expire times as Win32 FILETIME, which is 64
+    bits of 100 nanosecond intervals since Jan 01 1601.
+
+    ClientCookie expects time in 32-bit value expressed in seconds since the
+    epoch (Jan 01 1970).
+
+    """
+    if filetime < WIN32_EPOCH:
+        raise ValueError("filetime (%d) is before epoch (%d)" %
+                         (filetime, WIN32_EPOCH))
+
+    return divmod((filetime - WIN32_EPOCH), 10000000L)[0]
+
+def binary_to_char(c): return "%02X" % ord(c)
+def binary_to_str(d): return string.join(map(binary_to_char, list(d)), "")
+
+class MSIEBase:
+    magic_re = re.compile(r"Client UrlCache MMF Ver \d\.\d.*")
+    padding = "\x0d\xf0\xad\x0b"
+
+    msie_domain_re = re.compile(r"^([^/]+)(/.*)$")
+    cookie_re = re.compile("Cookie\:.+\@([\x21-\xFF]+).*?"
+                           "( +\ [\x21-\xFF]+\ txt)")
+
+    # path under HKEY_CURRENT_USER from which to get location of index.dat
+    reg_path = r"software\microsoft\windows" \
+               r"\currentversion\explorer\shell folders"
+    reg_key = "Cookies"
+
+    def __init__(self):
+        self._delayload_domains = {}
+
+    def _delayload_domain(self, domain):
+        # if necessary, lazily load cookies for this domain
+        delayload_info = self._delayload_domains.get(domain)
+        if delayload_info is not None:
+            cookie_file, ignore_discard, ignore_expires = delayload_info
+            try:
+                self.load_cookie_data(cookie_file,
+                                      ignore_discard, ignore_expires)
+            except (LoadError, IOError):
+                debug("error reading cookie file, skipping: %s", cookie_file)
+            else:
+                del self._delayload_domains[domain]
+
+    def _load_cookies_from_file(self, filename):
+        debug("Loading MSIE cookies file: %s", filename)
+        cookies = []
+
+        cookies_fh = open(filename)
+
+        try:
+            while 1:
+                key = cookies_fh.readline()
+                if key == "": break
+
+                rl = cookies_fh.readline
+                def getlong(rl=rl): return long(rl().rstrip())
+                def getstr(rl=rl): return rl().rstrip()
+
+                key = key.rstrip()
+                value = getstr()
+                domain_path = getstr()
+                flags = getlong()  # 0x2000 bit is for secure I think
+                lo_expire = getlong()
+                hi_expire = getlong()
+                lo_create = getlong()
+                hi_create = getlong()
+                sep = getstr()
+
+                if "" in (key, value, domain_path, flags, hi_expire, lo_expire,
+                          hi_create, lo_create, sep) or (sep != "*"):
+                    break
+
+                m = self.msie_domain_re.search(domain_path)
+                if m:
+                    domain = m.group(1)
+                    path = m.group(2)
+
+                    cookies.append({"KEY": key, "VALUE": value, "DOMAIN": domain,
+                                    "PATH": path, "FLAGS": flags, "HIXP": hi_expire,
+                                    "LOXP": lo_expire, "HICREATE": hi_create,
+                                    "LOCREATE": lo_create})
+        finally:
+            cookies_fh.close()
+
+        return cookies
+
+    def load_cookie_data(self, filename,
+                         ignore_discard=False, ignore_expires=False):
+        """Load cookies from file containing actual cookie data.
+
+        Old cookies are kept unless overwritten by newly loaded ones.
+
+        You should not call this method if the delayload attribute is set.
+
+        I think each of these files contain all cookies for one user, domain,
+        and path.
+
+        filename: file containing cookies -- usually found in a file like
+         C:\WINNT\Profiles\joe\Cookies\joe@blah[1].txt
+
+        """
+        now = int(time.time())
+
+        cookie_data = self._load_cookies_from_file(filename)
+
+        for cookie in cookie_data:
+            flags = cookie["FLAGS"]
+            secure = ((flags & 0x2000) != 0)
+            filetime = (cookie["HIXP"] << 32) + cookie["LOXP"]
+            expires = epoch_time_offset_from_win32_filetime(filetime)
+            if expires < now:
+                discard = True
+            else:
+                discard = False
+            domain = cookie["DOMAIN"]
+            initial_dot = startswith(domain, ".")
+            if initial_dot:
+                domain_specified = True
+            else:
+                # MSIE 5 does not record whether the domain cookie-attribute
+                # was specified.
+                # Assuming it wasn't is conservative, because with strict
+                # domain matching this will match less frequently; with regular
+                # Netscape tail-matching, this will match at exactly the same
+                # times that domain_specified = True would.  It also means we
+                # don't have to prepend a dot to achieve consistency with our
+                # own & Mozilla's domain-munging scheme.
+                domain_specified = False
+
+            # assume path_specified is false
+            # XXX is there other stuff in here? -- eg. comment, commentURL?
+            c = Cookie(0,
+                       cookie["KEY"], cookie["VALUE"],
+                       None, False,
+                       domain, domain_specified, initial_dot,
+                       cookie["PATH"], False,
+                       secure,
+                       expires,
+                       discard,
+                       None,
+                       None,
+                       {"flags": flags})
+            if not ignore_discard and c.discard:
+                continue
+            if not ignore_expires and c.is_expired(now):
+                continue
+            CookieJar.set_cookie(self, c)
+
+    def load_from_registry(self, ignore_discard=False, ignore_expires=False,
+                           username=None):
+        """
+        username: only required on win9x
+
+        """
+        cookies_dir = regload(self.reg_path, self.reg_key)
+        filename = os.path.normpath(os.path.join(cookies_dir, "INDEX.DAT"))
+        self.load(filename, ignore_discard, ignore_expires, username)
+
+    def _really_load(self, index, filename, ignore_discard, ignore_expires,
+                     username):
+        now = int(time.time())
+
+        if username is None:
+            username = string.lower(os.environ['USERNAME'])
+
+        cookie_dir = os.path.dirname(filename)
+
+        data = index.read(256)
+        if len(data) != 256:
+            raise LoadError("%s file is too short" % filename)
+
+        # Cookies' index.dat file starts with 32 bytes of signature
+        # followed by an offset to the first record, stored as a little-
+        # endian DWORD.
+        sig, size, data = data[:32], data[32:36], data[36:]
+        size = struct.unpack("<L", size)[0]
+
+        # check that sig is valid
+        if not self.magic_re.match(sig) or size != 0x4000:
+            raise LoadError("%s ['%s' %s] does not seem to contain cookies" %
+                          (str(filename), sig, size))
+
+        # skip to start of first record
+        index.seek(size, 0)
+
+        sector = 128  # size of sector in bytes
+
+        while 1:
+            data = ""
+
+            # Cookies are usually in two contiguous sectors, so read in two
+            # sectors and adjust if not a Cookie.
+            to_read = 2 * sector
+            d = index.read(to_read)
+            if len(d) != to_read:
+                break
+            data = data + d
+
+            # Each record starts with a 4-byte signature and a count
+            # (little-endian DWORD) of sectors for the record.
+            sig, size, data = data[:4], data[4:8], data[8:]
+            size = struct.unpack("<L", size)[0]
+
+            to_read = (size - 2) * sector
+
+##             from urllib import quote
+##             print "data", quote(data)
+##             print "sig", quote(sig)
+##             print "size in sectors", size
+##             print "size in bytes", size*sector
+##             print "size in units of 16 bytes", (size*sector) / 16
+##             print "size to read in bytes", to_read
+##             print
+
+            if sig != "URL ":
+                assert sig in ("HASH", "LEAK",
+                               self.padding, "\x00\x00\x00\x00"), \
+                       "unrecognized MSIE index.dat record: %s" % \
+                       binary_to_str(sig)
+                if sig == "\x00\x00\x00\x00":
+                    # assume we've got all the cookies, and stop
+                    break
+                if sig == self.padding:
+                    continue
+                # skip the rest of this record
+                assert to_read >= 0
+                if size != 2:
+                    assert to_read != 0
+                    index.seek(to_read, 1)
+                continue
+
+            # read in rest of record if necessary
+            if size > 2:
+                more_data = index.read(to_read)
+                if len(more_data) != to_read: break
+                data = data + more_data
+
+            cookie_re = ("Cookie\:%s\@([\x21-\xFF]+).*?" % username +
+                         "(%s\ [\x21-\xFF]+\ txt)" % username)
+            m = re.search(cookie_re, data, re.I)
+            if m:
+                cookie_file = os.path.join(cookie_dir, m.group(2))
+                if not self.delayload:
+                    try:
+                        self.load_cookie_data(cookie_file,
+                                              ignore_discard, ignore_expires)
+                    except (LoadError, IOError):
+                        debug("error reading cookie file, skipping: %s",
+                              cookie_file)
+                else:
+                    domain = m.group(1)
+                    i = domain.find("/")
+                    if i != -1:
+                        domain = domain[:i]
+
+                    self._delayload_domains[domain] = (
+                        cookie_file, ignore_discard, ignore_expires)
+
+
+class MSIECookieJar(MSIEBase, FileCookieJar):
+    """FileCookieJar that reads from the Windows MSIE cookies database.
+
+    MSIECookieJar can read the cookie files of Microsoft Internet Explorer
+    (MSIE) for Windows version 5 on Windows NT and version 6 on Windows XP and
+    Windows 98.  Other configurations may also work, but are untested.  Saving
+    cookies in MSIE format is NOT supported.  If you save cookies, they'll be
+    in the usual Set-Cookie3 format, which you can read back in using an
+    instance of the plain old CookieJar class.  Don't save using the same
+    filename that you loaded cookies from, because you may succeed in
+    clobbering your MSIE cookies index file!
+
+    You should be able to have LWP share Internet Explorer's cookies like
+    this (note you need to supply a username to load_from_registry if you're on
+    Windows 9x or Windows ME):
+
+    cj = MSIECookieJar(delayload=1)
+    # find cookies index file in registry and load cookies from it
+    cj.load_from_registry()
+    opener = ClientCookie.build_opener(ClientCookie.HTTPCookieProcessor(cj))
+    response = opener.open("http://example.com/")
+
+    Iterating over a delayloaded MSIECookieJar instance will not cause any
+    cookies to be read from disk.  To force reading of all cookies from disk,
+    call read_all_cookies.  Note that the following methods iterate over self:
+    clear_temporary_cookies, clear_expired_cookies, __len__, __repr__, __str__
+    and as_string.
+
+    Additional methods:
+
+    load_from_registry(ignore_discard=False, ignore_expires=False,
+                       username=None)
+    load_cookie_data(filename, ignore_discard=False, ignore_expires=False)
+    read_all_cookies()
+
+    """
+    def __init__(self, filename=None, delayload=False, policy=None):
+        MSIEBase.__init__(self)
+        FileCookieJar.__init__(self, filename, delayload, policy)
+
+    def set_cookie(self, cookie):
+        if self.delayload:
+            self._delayload_domain(cookie.domain)
+        CookieJar.set_cookie(self, cookie)
+
+    def _cookies_for_request(self, request):
+        """Return a list of cookies to be returned to server."""
+        domains = self._cookies.copy()
+        domains.update(self._delayload_domains)
+        domains = domains.keys()
+
+        cookies = []
+        for domain in domains:
+            cookies.extend(self._cookies_for_domain(domain, request))
+        return cookies
+
+    def _cookies_for_domain(self, domain, request):
+        if not self._policy.domain_return_ok(domain, request):
+            return []
+        debug("Checking %s for cookies to return", domain)
+        if self.delayload:
+            self._delayload_domain(domain)
+        return CookieJar._cookies_for_domain(self, domain, request)
+
+    def read_all_cookies(self):
+        """Eagerly read in all cookies."""
+        if self.delayload:
+            for domain in self._delayload_domains.keys():
+                self._delayload_domain(domain)
+
+    def load(self, filename, ignore_discard=False, ignore_expires=False,
+             username=None):
+        """Load cookies from an MSIE 'index.dat' cookies index file.
+
+        filename: full path to cookie index file
+        username: only required on win9x
+
+        """
+        if filename is None:
+            if self.filename is not None: filename = self.filename
+            else: raise ValueError(MISSING_FILENAME_TEXT)
+
+        index = open(filename, "rb")
+
+        try:
+            self._really_load(index, filename, ignore_discard, ignore_expires,
+                              username)
+        finally:
+            index.close()
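[Editor's note: the FILETIME arithmetic in `epoch_time_offset_from_win32_filetime` above can be checked in isolation. A self-contained modern-Python sketch of the same conversion:

```python
# Win32 FILETIME counts 100-nanosecond ticks since 1601-01-01;
# Unix time counts seconds since 1970-01-01.
WIN32_EPOCH = 0x019db1ded53e8000  # 1970-01-01 00:00:00 expressed as FILETIME

def filetime_to_epoch(filetime):
    if filetime < WIN32_EPOCH:
        raise ValueError("filetime predates the Unix epoch")
    return (filetime - WIN32_EPOCH) // 10_000_000  # ticks -> whole seconds

# one second past the epoch is exactly ten million 100 ns ticks
print(filetime_to_epoch(WIN32_EPOCH + 10_000_000))  # 1
```

The magic constant is the 1601-to-1970 offset (11,644,473,600 seconds) scaled to 100 ns ticks, which is why the module rejects any FILETIME smaller than it.]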

Added: trunk/bigboard/libgmail/ClientCookie/_MSIEDBCookieJar.py
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/ClientCookie/_MSIEDBCookieJar.py	Mon May 12 18:42:13 2008
@@ -0,0 +1,140 @@
+"""Persistent CookieJar based on MS Internet Explorer cookie database.
+
+Copyright 2003-2006 John J Lee <jjl pobox com>
+
+This code is free software; you can redistribute it and/or modify it
+under the terms of the BSD or ZPL 2.1 licenses (see the file
+COPYING.txt included with the distribution).
+
+**********************************************************************
+THIS DOESN'T WORK!
+
+It's just a sketch, to check the base class is OK.
+
+**********************************************************************
+
+"""
+
+from ClientCookie import MSIEBase, CookieJar
+from _Util import time2netscape
+
+def set_cookie_hdr_from_cookie(cookie):
+    params = []
+    if cookie.name is not None:
+        params.append("%s=%s" % cookie.name, cookie.value)
+    else:
+        params.append(cookie.name)
+    if cookie.expires:
+        params.append("expires=" % time2netscape(cookie.expires))
+    if cookie.domain_specified:
+        params.append("Domain=%s" % cookie.domain)
+    if cookie.path_specified:
+        params.append("path=%s" % cookie.path)
+    if cookie.port_specified:
+        if cookie.port is None:
+            params.append("Port")
+        else:
+            params.append("Port=%s" % cookie.port)
+    if cookie.secure:
+        params.append("secure")
+##     if cookie.comment:
+##         params.append("Comment=%s" % cookie.comment)
+##     if cookie.comment_url:
+##         params.append("CommentURL=%s" % cookie.comment_url)
+    return "; ".join(params)
+
+class MSIEDBCookieJar(MSIEBase, CookieJar):
+    """A CookieJar that relies on MS Internet Explorer's cookie database.
+
+    XXX Require ctypes or write C extension?  win32all probably requires
+    latter.
+
+    **********************************************************************
+    THIS DOESN'T WORK!
+
+    It's just a sketch, to check the base class is OK.
+
+    **********************************************************************
+
+    MSIEDBCookieJar, unlike MSIECookieJar, keeps no state for itself, but
+    relies on the MS Internet Explorer's cookie database.  It uses the win32
+    API functions InternetGetCookie() and InternetSetCookie(), from the wininet
+    library.
+
+    Note that MSIE itself may impose additional conditions on cookie processing
+    on top of that done by CookiePolicy.  For cookie setting, the class tries
+    to foil that by providing the request details and Set-Cookie header it
+    thinks MSIE wants to see.  For returning cookies to the server, it's up to
+    MSIE.
+
+    Note that session cookies ARE NOT written to disk and won't be accessible
+    from other processes.  .clear_session_cookies() has no effect.
+
+    .clear_expired_cookies() has no effect: MSIE is responsible for this.
+
+    .clear() will raise NotImplementedError unless all three arguments are
+    given.
+
+    """
+    def __init__(self, policy=None):
+        MSIEBase.__init__(self)
+        CookieJar.__init__(self, policy)
+    def clear_session_cookies(self): pass
+    def clear_expired_cookies(self): pass
+    def clear(self, domain=None, path=None, name=None):
+        if None in [domain, path, name]:
+            raise NotImplementedError()
+        # XXXX
+        url = self._fake_url(domain, path)
+        hdr = "%s=; domain=%s; path=%s; max-age=0" % (name, domain, path)
+        r = windll.InternetSetCookie(url, None, hdr)
+        # XXX return value of InternetSetCookie?
+    def _fake_url(self, domain, path):
+        # to convince MSIE that Set-Cookie is OK
+        return "http://%s%s"; % (domain, path)
+    def set_cookie(self, cookie):
+        # XXXX
+        url = self._fake_url(cookie.domain, cookie.path)
+        r = windll.InternetSetCookie(
+            url, None, set_cookie_hdr_from_cookie(cookie))
+        # XXX return value of InternetSetCookie?
+    def add_cookie_header(self, request, unverifiable=False):
+        # XXXX
+        cookie_header = windll.InternetGetCookie(request.get_full_url())
+        # XXX return value of InternetGetCookie?
+        request.add_unredirected_header(cookie_header)
+    def __iter__(self):
+        self._load_index_dat()
+        return CookieJar.__iter__(self)
+    def _cookies_for_request(self, request):
+        raise NotImplementedError()  # XXXX
+    def _cookies_for_domain(self, domain, request):
+        #raise NotImplementedError()  # XXXX
+        debug("Checking %s for cookies to return", domain)
+        if not self._policy.domain_return_ok(domain, request):
+            return []
+
+        # XXXX separate out actual loading of cookie data, so only index.dat is
+        #  read in ._load_index_dat(), and ._really_load() calls that, then
+        #  ._delayload_domain for all domains if not self.delayload.
+        #  We then just call ._load_index_dat()
+        self._delayload = False
+        self._really_load()
+
+        cookies_by_path = self._cookies.get(domain)
+        if cookies_by_path is None:
+            return []
+
+        cookies = []
+        for path in cookies_by_path.keys():
+            if not self._policy.path_return_ok(path, request, unverifiable):
+                continue
+            for name, cookie in cookies_by_path[path].items():
+                if not self._policy.return_ok(cookie, request, unverifiable):
+                    debug("   not returning cookie")
+                    continue
+                debug("   it's a match")
+                cookies.append(cookie)
+
+        return cookies
+
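[Editor's note: `set_cookie_hdr_from_cookie` above assembles a Netscape-style Set-Cookie header string from individual cookie attributes. A self-contained sketch of the same assembly, with hypothetical attribute values and no dependency on the Cookie class:

```python
def set_cookie_header(name, value, domain=None, path=None,
                      expires=None, secure=False):
    # Build "name=value; attr=value; flag" in the Netscape style
    parts = ["%s=%s" % (name, value)]
    if expires is not None:
        parts.append("expires=%s" % expires)  # pre-formatted date string
    if domain is not None:
        parts.append("domain=%s" % domain)
    if path is not None:
        parts.append("path=%s" % path)
    if secure:
        parts.append("secure")  # value-less flag attribute
    return "; ".join(parts)

print(set_cookie_header("sid", "abc123", domain=".example.com",
                        path="/", secure=True))
# sid=abc123; domain=.example.com; path=/; secure
```

As the module warns, MSIE may apply its own processing on top of whatever header string is handed to InternetSetCookie, so this only covers the string-building half of the job.]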

Added: trunk/bigboard/libgmail/ClientCookie/_MozillaCookieJar.py
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/ClientCookie/_MozillaCookieJar.py	Mon May 12 18:42:13 2008
@@ -0,0 +1,173 @@
+"""Mozilla / Netscape cookie loading / saving.
+
+Copyright 2002-2006 John J Lee <jjl pobox com>
+Copyright 1997-1999 Gisle Aas (original libwww-perl code)
+
+This code is free software; you can redistribute it and/or modify it
+under the terms of the BSD or ZPL 2.1 licenses (see the file
+COPYING.txt included with the distribution).
+
+"""
+
+import re, string, time
+
+from _ClientCookie import reraise_unmasked_exceptions, FileCookieJar, Cookie, \
+     MISSING_FILENAME_TEXT, LoadError
+from _Util import startswith, endswith
+from _Debug import getLogger
+debug = getLogger("ClientCookie").debug
+
+try: True
+except NameError:
+    True = 1
+    False = 0
+
+try: issubclass(Exception(), (Exception,))
+except TypeError:
+    real_issubclass = issubclass
+    from _Util import compat_issubclass
+    issubclass = compat_issubclass
+    del compat_issubclass
+
+
+class MozillaCookieJar(FileCookieJar):
+    """
+
+    WARNING: you may want to backup your browser's cookies file if you use
+    this class to save cookies.  I *think* it works, but there have been
+    bugs in the past!
+
+    This class differs from CookieJar only in the format it uses to save and
+    load cookies to and from a file.  This class uses the Mozilla/Netscape
+    `cookies.txt' format.  lynx uses this file format, too.
+
+    Don't expect cookies saved while the browser is running to be noticed by
+    the browser (in fact, Mozilla on unix will overwrite your saved cookies if
+    you change them on disk while it's running; on Windows, you probably can't
+    save at all while the browser is running).
+
+    Note that the Mozilla/Netscape format will downgrade RFC2965 cookies to
+    Netscape cookies on saving.
+
+    In particular, the cookie version and port number information is lost,
+    together with information about whether or not Path, Port and Discard were
+    specified by the Set-Cookie2 (or Set-Cookie) header, and whether or not the
+    domain as set in the HTTP header started with a dot (yes, I'm aware some
+    domains in Netscape files start with a dot and some don't -- trust me, you
+    really don't want to know any more about this).
+
+    Note that though Mozilla and Netscape use the same format, they use
+    slightly different headers.  The class saves cookies using the Netscape
+    header by default (Mozilla can cope with that).
+
+    """
+    magic_re = "#( Netscape)? HTTP Cookie File"
+    header = """\
+    # Netscape HTTP Cookie File
+    # http://www.netscape.com/newsref/std/cookie_spec.html
+    # This is a generated file!  Do not edit.
+
+"""
+
+    def _really_load(self, f, filename, ignore_discard, ignore_expires):
+        now = time.time()
+
+        magic = f.readline()
+        if not re.search(self.magic_re, magic):
+            f.close()
+            raise LoadError(
+                "%s does not look like a Netscape format cookies file" %
+                filename)
+
+        try:
+            while 1:
+                line = f.readline()
+                if line == "": break
+
+                # last field may be absent, so keep any trailing tab
+                if endswith(line, "\n"): line = line[:-1]
+
+                # skip comments and blank lines XXX what is $ for?
+                if (startswith(string.strip(line), "#") or
+                    startswith(string.strip(line), "$") or
+                    string.strip(line) == ""):
+                    continue
+
+                domain, domain_specified, path, secure, expires, name, value = \
+                        string.split(line, "\t")
+                secure = (secure == "TRUE")
+                domain_specified = (domain_specified == "TRUE")
+                if name == "":
+                    name = value
+                    value = None
+
+                initial_dot = startswith(domain, ".")
+                assert domain_specified == initial_dot
+
+                discard = False
+                if expires == "":
+                    expires = None
+                    discard = True
+
+                # assume path_specified is false
+                c = Cookie(0, name, value,
+                           None, False,
+                           domain, domain_specified, initial_dot,
+                           path, False,
+                           secure,
+                           expires,
+                           discard,
+                           None,
+                           None,
+                           {})
+                if not ignore_discard and c.discard:
+                    continue
+                if not ignore_expires and c.is_expired(now):
+                    continue
+                self.set_cookie(c)
+
+        except:
+            reraise_unmasked_exceptions((IOError,))
+            raise LoadError("invalid Netscape format file %s: %s" %
+                          (filename, line))
+
+    def save(self, filename=None, ignore_discard=False, ignore_expires=False):
+        if filename is None:
+            if self.filename is not None: filename = self.filename
+            else: raise ValueError(MISSING_FILENAME_TEXT)
+
+        f = open(filename, "w")
+        try:
+            debug("Saving Netscape cookies.txt file")
+            f.write(self.header)
+            now = time.time()
+            for cookie in self:
+                if not ignore_discard and cookie.discard:
+                    debug("   Not saving %s: marked for discard", cookie.name)
+                    continue
+                if not ignore_expires and cookie.is_expired(now):
+                    debug("   Not saving %s: expired", cookie.name)
+                    continue
+                if cookie.secure: secure = "TRUE"
+                else: secure = "FALSE"
+                if startswith(cookie.domain, "."): initial_dot = "TRUE"
+                else: initial_dot = "FALSE"
+                if cookie.expires is not None:
+                    expires = str(cookie.expires)
+                else:
+                    expires = ""
+                if cookie.value is None:
+                    # cookies.txt regards 'Set-Cookie: foo' as a cookie
+                    # with no name, whereas cookielib regards it as a
+                    # cookie with no value.
+                    name = ""
+                    value = cookie.name
+                else:
+                    name = cookie.name
+                    value = cookie.value
+                f.write(
+                    string.join([cookie.domain, initial_dot, cookie.path,
+                                 secure, expires, name, value], "\t")+
+                    "\n")
+        finally:
+            f.close()

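ClientCookie is the ancestor of the standard library's cookielib (http.cookiejar in Python 3), which kept this class largely intact. A quick round-trip through the tab-separated cookies.txt format described in the docstring, sketched with the stdlib descendant (the cookie values are made up):

```python
# Save one cookie in Netscape cookies.txt format and load it back,
# using the stdlib descendant of the MozillaCookieJar class above.
import http.cookiejar, os, tempfile

jar = http.cookiejar.MozillaCookieJar()
jar.set_cookie(http.cookiejar.Cookie(
    version=0, name="session", value="abc123",
    port=None, port_specified=False,
    domain=".example.com", domain_specified=True, domain_initial_dot=True,
    path="/", path_specified=True,
    secure=False, expires=2000000000,    # far future, so not expired
    discard=False, comment=None, comment_url=None, rest={}))

path = os.path.join(tempfile.mkdtemp(), "cookies.txt")
jar.save(path)                           # writes the "# Netscape ..." header

jar2 = http.cookiejar.MozillaCookieJar()
jar2.load(path)
print([c.name for c in jar2])
# ['session']
```

As the docstring warns, version and port information would be lost here for RFC 2965 cookies; only the Netscape fields survive the round trip.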
Added: trunk/bigboard/libgmail/ClientCookie/_Opener.py
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/ClientCookie/_Opener.py	Mon May 12 18:42:13 2008
@@ -0,0 +1,197 @@
+"""Integration with Python standard library module urllib2: OpenerDirector
+class.
+
+Copyright 2004-2006 John J Lee <jjl@pobox.com>
+
+This code is free software; you can redistribute it and/or modify it
+under the terms of the BSD or ZPL 2.1 licenses (see the file
+COPYING.txt included with the distribution).
+
+"""
+
+try: True
+except NameError:
+    True = 1
+    False = 0
+
+import os, tempfile, urllib2, string, bisect, urlparse
+from urllib import url2pathname
+
+from _Util import startswith, isstringlike
+from _Request import Request
+
+def methnames(obj):
+    """Return method names of class instance.
+
+    dir(obj) doesn't work across Python versions, this does.
+
+    """
+    return methnames_of_instance_as_dict(obj).keys()
+
+def methnames_of_instance_as_dict(inst):
+    names = {}
+    names.update(methnames_of_class_as_dict(inst.__class__))
+    for methname in dir(inst):
+        candidate = getattr(inst, methname)
+        if callable(candidate):
+            names[methname] = None
+    return names
+
+def methnames_of_class_as_dict(klass):
+    names = {}
+    for methname in dir(klass):
+        candidate = getattr(klass, methname)
+        if callable(candidate):
+            names[methname] = None
+    for baseclass in klass.__bases__:
+        names.update(methnames_of_class_as_dict(baseclass))
+    return names
+
+
+class OpenerMixin:
+    def _request(self, url_or_req, data):
+        if isstringlike(url_or_req):
+            req = Request(url_or_req, data)
+        else:
+            # already a urllib2.Request or ClientCookie.Request instance
+            req = url_or_req
+            if data is not None:
+                req.add_data(data)
+        return req
+
+    def retrieve(self, fullurl, filename=None, reporthook=None, data=None):
+        """Returns (filename, headers).
+
+        For remote objects, the default filename will refer to a temporary
+        file.
+
+        """
+        req = self._request(fullurl, data)
+        type_ = req.get_type()
+        fp = self.open(req)
+        headers = fp.info()
+        if filename is None and type_ == 'file':
+            return url2pathname(req.get_selector()), headers
+        if filename:
+            tfp = open(filename, 'wb')
+        else:
+            path = urlparse.urlparse(fullurl)[2]
+            suffix = os.path.splitext(path)[1]
+            tfp = tempfile.TemporaryFile("wb", suffix=suffix)
+        result = filename, headers
+        bs = 1024*8
+        size = -1
+        read = 0
+        blocknum = 1
+        if reporthook:
+            if headers.has_key("content-length"):
+                size = int(headers["Content-Length"])
+            reporthook(0, bs, size)
+        while 1:
+            block = fp.read(bs)
+            read += len(block)
+            if reporthook:
+                reporthook(blocknum, bs, size)
+            blocknum = blocknum + 1
+            if not block:
+                break
+            tfp.write(block)
+        fp.close()
+        tfp.close()
+        del fp
+        del tfp
+        if size>=0 and read<size:
+            raise IOError("incomplete retrieval error",
+                          "got only %d bytes out of %d" % (read,size))
+        return result
+
+
+class OpenerDirector(urllib2.OpenerDirector, OpenerMixin):
+    def __init__(self):
+        urllib2.OpenerDirector.__init__(self)
+        self.process_response = {}
+        self.process_request = {}
+
+    def add_handler(self, handler):
+        added = False
+        for meth in methnames(handler):
+            i = string.find(meth, "_")
+            protocol = meth[:i]
+            condition = meth[i+1:]
+
+            if startswith(condition, "error"):
+                j = string.find(meth[i+1:], "_") + i + 1
+                kind = meth[j+1:]
+                try:
+                    kind = int(kind)
+                except ValueError:
+                    pass
+                lookup = self.handle_error.get(protocol, {})
+                self.handle_error[protocol] = lookup
+            elif (condition == "open" and
+                  protocol not in ["do", "proxy"]):  # hack -- see below
+                kind = protocol
+                lookup = self.handle_open
+            elif (condition in ["response", "request"] and
+                  protocol != "redirect"):  # yucky hack
+                # hack above is to fix an HTTPRedirectHandler problem: its
+                # redirect_request method makes the test above mistake it
+                # for a processor :-((
+                kind = protocol
+                lookup = getattr(self, "process_"+condition)
+            else:
+                continue
+
+            if lookup.has_key(kind):
+                bisect.insort(lookup[kind], handler)
+            else:
+                lookup[kind] = [handler]
+            added = True
+
+        if added:
+            # XXX why does self.handlers need to be sorted?
+            bisect.insort(self.handlers, handler)
+            handler.add_parent(self)
+
+    def open(self, fullurl, data=None):
+        req = self._request(fullurl, data)
+        type_ = req.get_type()
+
+        # pre-process request
+        # XXX should we allow a Processor to change the type (URL
+        #   scheme) of the request?
+        meth_name = type_+"_request"
+        for processor in self.process_request.get(type_, []):
+            meth = getattr(processor, meth_name)
+            req = meth(req)
+
+        response = urllib2.OpenerDirector.open(self, req, data)
+
+        # post-process response
+        meth_name = type_+"_response"
+        for processor in self.process_response.get(type_, []):
+            meth = getattr(processor, meth_name)
+            response = meth(req, response)
+
+        return response
+
+    def error(self, proto, *args):
+        if proto in ['http', 'https']:
+            # XXX http[s] protocols are special-cased
+            dict = self.handle_error['http'] # https is not different from http
+            proto = args[2]  # YUCK!
+            meth_name = 'http_error_%s' % proto
+            http_err = 1
+            orig_args = args
+        else:
+            dict = self.handle_error
+            meth_name = proto + '_error'
+            http_err = 0
+        args = (dict, proto, meth_name) + args
+        result = apply(self._call_chain, args)
+        if result:
+            return result
+
+        if http_err:
+            args = (dict, 'default', 'http_error_default') + orig_args
+            return apply(self._call_chain, args)

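add_handler() above registers a handler by inspecting its method names: `<scheme>_open` hooks URL opening, `<scheme>_request`/`<scheme>_response` makes it a processor, and `<scheme>_error_<code>` an error handler. A standalone sketch of just that name parsing (`classify` is a hypothetical helper, not ClientCookie API):

```python
# Parse a handler method name the way add_handler does, returning
# (table, protocol, kind) or None if the name registers nothing.
def classify(meth):
    protocol, _, condition = meth.partition("_")
    if condition.startswith("error"):
        kind = condition.partition("_")[2]
        try:
            kind = int(kind)            # e.g. "302" -> 302
        except ValueError:
            pass                        # non-numeric kinds stay strings
        return ("handle_error", protocol, kind)
    if condition == "open" and protocol not in ("do", "proxy"):
        return ("handle_open", protocol, None)
    if condition in ("request", "response") and protocol != "redirect":
        # "redirect" is excluded so HTTPRedirectHandler.redirect_request
        # isn't mistaken for a processor.
        return ("process_" + condition, protocol, None)
    return None

print(classify("http_open"))        # ('handle_open', 'http', None)
print(classify("http_error_302"))   # ('handle_error', 'http', 302)
print(classify("https_request"))    # ('process_request', 'https', None)
print(classify("do_open"))          # None
```

This is why a handler can hook several stages at once just by defining the right method names, and why add_handler needs the "do"/"proxy"/"redirect" special cases.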
Added: trunk/bigboard/libgmail/ClientCookie/_Request.py
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/ClientCookie/_Request.py	Mon May 12 18:42:13 2008
@@ -0,0 +1,73 @@
+"""Integration with Python standard library module urllib2: Request class.
+
+Copyright 2004-2006 John J Lee <jjl@pobox.com>
+
+This code is free software; you can redistribute it and/or modify it
+under the terms of the BSD or ZPL 2.1 licenses (see the file
+COPYING.txt included with the distribution).
+
+"""
+
+try: True
+except NameError:
+    True = 1
+    False = 0
+
+import urllib2, string
+
+from _ClientCookie import request_host
+
+
+class Request(urllib2.Request):
+    def __init__(self, url, data=None, headers={},
+             origin_req_host=None, unverifiable=False):
+        urllib2.Request.__init__(self, url, data, headers)
+        self.unredirected_hdrs = {}
+
+        # All the terminology below comes from RFC 2965.
+        self.unverifiable = unverifiable
+        # Set request-host of origin transaction.
+        # The origin request-host is needed in order to decide whether
+        # unverifiable sub-requests (automatic redirects, images embedded
+        # in HTML, etc.) are to third-party hosts.  If they are, the
+        # resulting transactions might need to be conducted with cookies
+        # turned off.
+        if origin_req_host is None:
+            origin_req_host = request_host(self)
+        self.origin_req_host = origin_req_host
+
+    def get_origin_req_host(self):
+        return self.origin_req_host
+
+    def is_unverifiable(self):
+        return self.unverifiable
+
+    def add_unredirected_header(self, key, val):
+        """Add a header that will not be added to a redirected request."""
+        self.unredirected_hdrs[string.capitalize(key)] = val
+
+    def has_header(self, header_name):
+        """True iff request has named header (regular or unredirected)."""
+        if (self.headers.has_key(header_name) or
+            self.unredirected_hdrs.has_key(header_name)):
+            return True
+        return False
+
+    def get_header(self, header_name, default=None):
+        return self.headers.get(
+            header_name,
+            self.unredirected_hdrs.get(header_name, default))
+
+    def header_items(self):
+        hdrs = self.unredirected_hdrs.copy()
+        hdrs.update(self.headers)
+        return hdrs.items()
+
+    def __str__(self):
+        return "<Request for %s>" % self.get_full_url()
+
+    def get_method(self):
+        if self.has_data():
+            return "POST"
+        else:
+            return "GET"

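The two RFC 2965 attributes this Request subclass adds (origin_req_host and unverifiable) were later adopted by the standard library, so the modern urllib.request.Request takes them directly. A sketch of the same third-party-request bookkeeping with the stdlib class (the URLs are hypothetical):

```python
# Mark a sub-request (e.g. an embedded image fetch) as unverifiable and
# record the origin transaction's request-host, as _Request.py does.
import urllib.request

req = urllib.request.Request(
    "http://cdn.example.net/img.png",      # hypothetical embedded resource
    origin_req_host="www.example.com",     # host of the page that embeds it
    unverifiable=True)                     # the user never approved this fetch

print(req.origin_req_host)   # 'www.example.com'
print(req.unverifiable)      # True
print(req.get_method())      # 'GET' -- no data, so GET
```

A cookie policy can compare origin_req_host against the request's own host to decide whether an unverifiable request is third-party and should be conducted with cookies turned off.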
Added: trunk/bigboard/libgmail/ClientCookie/_Util.py
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/ClientCookie/_Util.py	Mon May 12 18:42:13 2008
@@ -0,0 +1,671 @@
+"""Python backwards-compat., date/time routines, seekable file object wrapper.
+
+Copyright 2002-2006 John J Lee <jjl@pobox.com>
+
+This code is free software; you can redistribute it and/or modify it
+under the terms of the BSD or ZPL 2.1 licenses (see the file
+COPYING.txt included with the distribution).
+
+"""
+
+try: True
+except NameError:
+    True = 1
+    False = 0
+
+import re, string, time, copy, urllib
+from types import TupleType
+from cStringIO import StringIO
+
+try:
+    from exceptions import StopIteration
+except ImportError:
+    from ClientCookie._ClientCookie import StopIteration
+
+def startswith(string, initial):
+    if len(initial) > len(string): return False
+    return string[:len(initial)] == initial
+
+def endswith(string, final):
+    if len(final) > len(string): return False
+    return string[-len(final):] == final
+
+def compat_issubclass(obj, tuple_or_class):
+    # for 2.1 and below
+    if type(tuple_or_class) == TupleType:
+        for klass in tuple_or_class:
+            if issubclass(obj, klass):
+                return True
+        return False
+    return issubclass(obj, tuple_or_class)
+
+def compat_isinstance(obj, tuple_or_class):
+    # for 2.1 and below
+    if type(tuple_or_class) == TupleType:
+        for klass in tuple_or_class:
+            if isinstance(obj, klass):
+                return True
+        return False
+    return isinstance(obj, tuple_or_class)
+
+def isstringlike(x):
+    try: x+""
+    except: return False
+    else: return True
+
+SPACE_DICT = {}
+for c in string.whitespace:
+    SPACE_DICT[c] = None
+del c
+def isspace(string):
+    for c in string:
+        if not SPACE_DICT.has_key(c): return False
+    return True
+
+# this is here rather than in _HeadersUtil as it's just for
+# compatibility with old Python versions, rather than entirely new code
+def getheaders(msg, name):
+    """Get all values for a header.
+
+    This returns a list of values for headers given more than once; each
+    value in the result list is stripped in the same way as the result of
+    getheader().  If the header is not given, return an empty list.
+    """
+    result = []
+    current = ''
+    have_header = 0
+    for s in msg.getallmatchingheaders(name):
+        if isspace(s[0]):
+            if current:
+                current = "%s\n %s" % (current, string.strip(s))
+            else:
+                current = string.strip(s)
+        else:
+            if have_header:
+                result.append(current)
+            current = string.strip(s[string.find(s, ":") + 1:])
+            have_header = 1
+    if have_header:
+        result.append(current)
+    return result
+
+try:
+    from calendar import timegm
+    timegm((2045, 1, 1, 22, 23, 32))  # overflows in 2.1
+except:
+    # Number of days per month (except for February in leap years)
+    mdays = [0, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
+
+    # Return 1 for leap years, 0 for non-leap years
+    def isleap(year):
+        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
+
+    # Return number of leap years in range [y1, y2)
+    # Assume y1 <= y2 and no funny (non-leap century) years
+    def leapdays(y1, y2):
+        return (y2+3)/4 - (y1+3)/4
+
+    EPOCH = 1970
+    def timegm(tuple):
+        """Unrelated but handy function to calculate Unix timestamp from GMT."""
+        year, month, day, hour, minute, second = tuple[:6]
+        assert year >= EPOCH
+        assert 1 <= month <= 12
+        days = 365*(year-EPOCH) + leapdays(EPOCH, year)
+        for i in range(1, month):
+            days = days + mdays[i]
+        if month > 2 and isleap(year):
+            days = days + 1
+        days = days + day - 1
+        hours = days*24 + hour
+        minutes = hours*60 + minute
+        seconds = minutes*60L + second
+        return seconds
+
+
+# Date/time conversion routines for formats used by the HTTP protocol.
+
+EPOCH = 1970
+def my_timegm(tt):
+    year, month, mday, hour, min, sec = tt[:6]
+    if ((year >= EPOCH) and (1 <= month <= 12) and (1 <= mday <= 31) and
+        (0 <= hour <= 24) and (0 <= min <= 59) and (0 <= sec <= 61)):
+        return timegm(tt)
+    else:
+        return None
+
+days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
+months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
+          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
+months_lower = []
+for month in months: months_lower.append(string.lower(month))
+
+
+def time2isoz(t=None):
+    """Return a string representing time in seconds since epoch, t.
+
+    If the function is called without an argument, it will use the current
+    time.
+
+    The format of the returned string is like "YYYY-MM-DD hh:mm:ssZ",
+    representing Universal Time (UTC, aka GMT).  An example of this format is:
+
+    1994-11-24 08:49:37Z
+
+    """
+    if t is None: t = time.time()
+    year, mon, mday, hour, min, sec = time.gmtime(t)[:6]
+    return "%04d-%02d-%02d %02d:%02d:%02dZ" % (
+        year, mon, mday, hour, min, sec)
+
+def time2netscape(t=None):
+    """Return a string representing time in seconds since epoch, t.
+
+    If the function is called without an argument, it will use the current
+    time.
+
+    The format of the returned string is like this:
+
+    Wed, DD-Mon-YYYY HH:MM:SS GMT
+
+    """
+    if t is None: t = time.time()
+    year, mon, mday, hour, min, sec, wday = time.gmtime(t)[:7]
+    return "%s %02d-%s-%04d %02d:%02d:%02d GMT" % (
+        days[wday], mday, months[mon-1], year, hour, min, sec)
+
+
+UTC_ZONES = {"GMT": None, "UTC": None, "UT": None, "Z": None}
+
+timezone_re = re.compile(r"^([-+])?(\d\d?):?(\d\d)?$")
+def offset_from_tz_string(tz):
+    offset = None
+    if UTC_ZONES.has_key(tz):
+        offset = 0
+    else:
+        m = timezone_re.search(tz)
+        if m:
+            offset = 3600 * int(m.group(2))
+            if m.group(3):
+                offset = offset + 60 * int(m.group(3))
+            if m.group(1) == '-':
+                offset = -offset
+    return offset
+
+def _str2time(day, mon, yr, hr, min, sec, tz):
+    # translate month name to number
+    # month numbers start with 1 (January)
+    try:
+        mon = months_lower.index(string.lower(mon))+1
+    except ValueError:
+        # maybe it's already a number
+        try:
+            imon = int(mon)
+        except ValueError:
+            return None
+        if 1 <= imon <= 12:
+            mon = imon
+        else:
+            return None
+
+    # make sure clock elements are defined
+    if hr is None: hr = 0
+    if min is None: min = 0
+    if sec is None: sec = 0
+
+    yr = int(yr)
+    day = int(day)
+    hr = int(hr)
+    min = int(min)
+    sec = int(sec)
+
+    if yr < 1000:
+        # find "obvious" year
+        cur_yr = time.localtime(time.time())[0]
+        m = cur_yr % 100
+        tmp = yr
+        yr = yr + cur_yr - m
+        m = m - tmp
+        if abs(m) > 50:
+            if m > 0: yr = yr + 100
+            else: yr = yr - 100
+
+    # convert UTC time tuple to seconds since epoch (not timezone-adjusted)
+    t = my_timegm((yr, mon, day, hr, min, sec, tz))
+
+    if t is not None:
+        # adjust time using timezone string, to get absolute time since epoch
+        if tz is None:
+            tz = "UTC"
+        tz = string.upper(tz)
+        offset = offset_from_tz_string(tz)
+        if offset is None:
+            return None
+        t = t - offset
+
+    return t
+
+
+strict_re = re.compile(r"^[SMTWF][a-z][a-z], (\d\d) ([JFMASOND][a-z][a-z]) (\d\d\d\d) (\d\d):(\d\d):(\d\d) GMT$")
+wkday_re = re.compile(
+    r"^(?:Sun|Mon|Tue|Wed|Thu|Fri|Sat)[a-z]*,?\s*", re.I)
+loose_http_re = re.compile(
+    r"""^
+    (\d\d?)            # day
+       (?:\s+|[-\/])
+    (\w+)              # month
+        (?:\s+|[-\/])
+    (\d+)              # year
+    (?:
+       (?:\s+|:)       # separator before clock
+       (\d\d?):(\d\d)  # hour:min
+       (?::(\d\d))?    # optional seconds
+    )?                 # optional clock
+       \s*
+    ([-+]?\d{2,4}|(?![APap][Mm]\b)[A-Za-z]+)? # timezone
+       \s*
+    (?:\(\w+\))?       # ASCII representation of timezone in parens.
+       \s*$""", re.X)
+def http2time(text):
+    """Returns time in seconds since epoch of time represented by a string.
+
+    Return value is an integer.
+
+    None is returned if the format of str is unrecognized, the time is outside
+    the representable range, or the timezone string is not recognized.  If the
+    string contains no timezone, UTC is assumed.
+
+    The timezone in the string may be numerical (like "-0800" or "+0100") or a
+    string timezone (like "UTC", "GMT", "BST" or "EST").  Currently, only the
+    timezone strings equivalent to UTC (zero offset) are known to the function.
+
+    The function loosely parses the following formats:
+
+    Wed, 09 Feb 1994 22:23:32 GMT       -- HTTP format
+    Tuesday, 08-Feb-94 14:15:29 GMT     -- old rfc850 HTTP format
+    Tuesday, 08-Feb-1994 14:15:29 GMT   -- broken rfc850 HTTP format
+    09 Feb 1994 22:23:32 GMT            -- HTTP format (no weekday)
+    08-Feb-94 14:15:29 GMT              -- rfc850 format (no weekday)
+    08-Feb-1994 14:15:29 GMT            -- broken rfc850 format (no weekday)
+
+    The parser ignores leading and trailing whitespace.  The time may be
+    absent.
+
+    If the year is given with only 2 digits, the function will select the
+    century that makes the year closest to the current date.
+
+    """
+    # fast exit for strictly conforming string
+    m = strict_re.search(text)
+    if m:
+        g = m.groups()
+        mon = months_lower.index(string.lower(g[1])) + 1
+        tt = (int(g[2]), mon, int(g[0]),
+              int(g[3]), int(g[4]), float(g[5]))
+        return my_timegm(tt)
+
+    # No, we need some messy parsing...
+
+    # clean up
+    text = string.lstrip(text)
+    text = wkday_re.sub("", text, 1)  # Useless weekday
+
+    # tz is time zone specifier string
+    day, mon, yr, hr, min, sec, tz = [None]*7
+
+    # loose regexp parse
+    m = loose_http_re.search(text)
+    if m is not None:
+        day, mon, yr, hr, min, sec, tz = m.groups()
+    else:
+        return None  # bad format
+
+    return _str2time(day, mon, yr, hr, min, sec, tz)
+
+
+iso_re = re.compile(
+    """^
+    (\d{4})              # year
+       [-\/]?
+    (\d\d?)              # numerical month
+       [-\/]?
+    (\d\d?)              # day
+   (?:
+         (?:\s+|[-:Tt])  # separator before clock
+      (\d\d?):?(\d\d)    # hour:min
+      (?::?(\d\d(?:\.\d*)?))?  # optional seconds (and fractional)
+   )?                    # optional clock
+      \s*
+   ([-+]?\d\d?:?(:?\d\d)?
+    |Z|z)?               # timezone  (Z is "zero meridian", i.e. GMT)
+      \s*$""", re.X)
+def iso2time(text):
+    """
+    As for http2time, but parses the ISO 8601 formats:
+
+    1994-02-03 14:15:29 -0100    -- ISO 8601 format
+    1994-02-03 14:15:29          -- zone is optional
+    1994-02-03                   -- only date
+    1994-02-03T14:15:29          -- Use T as separator
+    19940203T141529Z             -- ISO 8601 compact format
+    19940203                     -- only date
+
+    """
+    # clean up
+    text = string.lstrip(text)
+
+    # tz is time zone specifier string
+    day, mon, yr, hr, min, sec, tz = [None]*7
+
+    # loose regexp parse
+    m = iso_re.search(text)
+    if m is not None:
+        # XXX there's an extra bit of the timezone I'm ignoring here: is
+        #   this the right thing to do?
+        yr, mon, day, hr, min, sec, tz, _ = m.groups()
+    else:
+        return None  # bad format
+
+    return _str2time(day, mon, yr, hr, min, sec, tz)
+
+
+# XXX Andrew Dalke kindly sent me a similar class in response to my request on
+# comp.lang.python, which I then proceeded to lose.  I wrote this class
+# instead, but I think he's released his code publicly since, could pinch the
+# tests from it, at least...
+
+# For testing seek_wrapper invariant (note that
+# test_urllib2.HandlerTest.test_seekable is expected to fail when this
+# invariant checking is turned on).  The invariant checking is done by module
+# ipdc, which is available here:
+# http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/436834
+## from ipdbc import ContractBase
+## class seek_wrapper(ContractBase):
+class seek_wrapper:
+    """Adds a seek method to a file object.
+
+    This is only designed for seeking on readonly file-like objects.
+
+    Wrapped file-like object must have a read method.  The readline method is
+    only supported if that method is present on the wrapped object.  The
+    readlines method is always supported.  xreadlines and iteration are
+    supported only for Python 2.2 and above.
+
+    Public attribute: wrapped (the wrapped file object).
+
+    WARNING: All other attributes of the wrapped object (ie. those that are not
+    one of wrapped, read, readline, readlines, xreadlines, __iter__ and next)
+    are passed through unaltered, which may or may not make sense for your
+    particular file object.
+
+    """
+    # General strategy is to check that cache is full enough, then delegate to
+    # the cache (self.__cache, which is a cStringIO.StringIO instance).  A seek
+    # position (self.__pos) is maintained independently of the cache, in order
+    # that a single cache may be shared between multiple seek_wrapper objects.
+    # Copying using module copy shares the cache in this way.
+
+    def __init__(self, wrapped):
+        self.wrapped = wrapped
+        self.__have_readline = hasattr(self.wrapped, "readline")
+        self.__cache = StringIO()
+        self.__pos = 0  # seek position
+
+    def invariant(self):
+        # The end of the cache is always at the same place as the end of the
+        # wrapped file.
+        return self.wrapped.tell() == len(self.__cache.getvalue())
+
+    def __getattr__(self, name):
+        wrapped = self.__dict__.get("wrapped")
+        if wrapped:
+            return getattr(wrapped, name)
+        return getattr(self.__class__, name)
+
+    def seek(self, offset, whence=0):
+        assert whence in [0,1,2]
+
+        # how much data, if any, do we need to read?
+        if whence == 2:  # 2: relative to end of *wrapped* file
+            if offset < 0: raise ValueError("negative seek offset")
+            # since we don't know yet where the end of that file is, we must
+            # read everything
+            to_read = None
+        else:
+            if whence == 0:  # 0: absolute
+                if offset < 0: raise ValueError("negative seek offset")
+                dest = offset
+            else:  # 1: relative to current position
+                pos = self.__pos
+                if pos + offset < 0:
+                    raise ValueError("seek to before start of file")
+                dest = pos + offset
+            end = len(self.__cache.getvalue())
+            to_read = dest - end
+            if to_read < 0:
+                to_read = 0
+
+        if to_read != 0:
+            self.__cache.seek(0, 2)
+            if to_read is None:
+                assert whence == 2
+                self.__cache.write(self.wrapped.read())
+                self.__pos = self.__cache.tell() - offset
+            else:
+                self.__cache.write(self.wrapped.read(to_read))
+                # Don't raise an exception even if we've seek()ed past the end
+                # of .wrapped, since fseek() doesn't complain in that case.
+                # Also like fseek(), pretend we have seek()ed past the end,
+                # i.e. not:
+                #self.__pos = self.__cache.tell()
+                # but rather:
+                self.__pos = dest
+        else:
+            self.__pos = dest
+
+    def tell(self):
+        return self.__pos
+
+    def __copy__(self):
+        cpy = self.__class__(self.wrapped)
+        cpy.__cache = self.__cache
+        return cpy
+
+    def read(self, size=-1):
+        pos = self.__pos
+        end = len(self.__cache.getvalue())
+        available = end - pos
+
+        # enough data already cached?
+        if size <= available and size != -1:
+            self.__cache.seek(pos)
+            self.__pos = pos+size
+            return self.__cache.read(size)
+
+        # no, so read sufficient data from wrapped file and cache it
+        self.__cache.seek(0, 2)
+        if size == -1:
+            self.__cache.write(self.wrapped.read())
+        else:
+            to_read = size - available
+            assert to_read > 0
+            self.__cache.write(self.wrapped.read(to_read))
+        self.__cache.seek(pos)
+
+        data = self.__cache.read(size)
+        self.__pos = self.__cache.tell()
+        assert self.__pos == pos + len(data)
+        return data
+
+    def readline(self, size=-1):
+        if not self.__have_readline:
+            raise NotImplementedError("no readline method on wrapped object")
+
+        # line we're about to read might not be complete in the cache, so
+        # read another line first
+        pos = self.__pos
+        self.__cache.seek(0, 2)
+        self.__cache.write(self.wrapped.readline())
+        self.__cache.seek(pos)
+
+        data = self.__cache.readline()
+        if size != -1:
+            r = data[:size]
+            # advance by what was actually returned, not the requested size
+            self.__pos = pos + len(r)
+        else:
+            r = data
+            self.__pos = pos + len(data)
+        return r
+
+    def readlines(self, sizehint=-1):
+        pos = self.__pos
+        self.__cache.seek(0, 2)
+        self.__cache.write(self.wrapped.read())
+        self.__cache.seek(pos)
+        data = self.__cache.readlines(sizehint)
+        self.__pos = self.__cache.tell()
+        return data
+
+    def __iter__(self): return self
+    def next(self):
+        line = self.readline()
+        if line == "": raise StopIteration
+        return line
+
+    xreadlines = __iter__
+
+    def __repr__(self):
+        return ("<%s at %s whose wrapped object = %r>" %
+                (self.__class__.__name__, hex(id(self)), self.wrapped))
+
+
+class response_seek_wrapper(seek_wrapper):
+
+    """
+    Supports copying response objects and setting response body data.
+
+    """
+
+    def __init__(self, wrapped):
+        seek_wrapper.__init__(self, wrapped)
+        self._headers = self.wrapped.info()
+
+    def __copy__(self):
+        cpy = seek_wrapper.__copy__(self)
+        # copy headers from delegate
+        cpy._headers = copy.copy(self.info())
+        return cpy
+
+    def info(self):
+        return self._headers
+
+    def set_data(self, data):
+        self.seek(0)
+        self.read()
+        self.close()
+        cache = self._seek_wrapper__cache = StringIO()
+        cache.write(data)
+        self.seek(0)
+
+
+class eoffile:
+    # file-like object that always claims to be at end-of-file...
+    def read(self, size=-1): return ""
+    def readline(self, size=-1): return ""
+    def __iter__(self): return self
+    def next(self): return ""
+    def close(self): pass
+
+class eofresponse(eoffile):
+    def __init__(self, url, headers, code, msg):
+        self._url = url
+        self._headers = headers
+        self.code = code
+        self.msg = msg
+    def geturl(self): return self._url
+    def info(self): return self._headers
+
+
+class closeable_response:
+    """Avoids unnecessarily clobbering urllib.addinfourl methods on .close().
+
+    Only supports responses returned by ClientCookie.HTTPHandler.
+
+    After .close(), the following methods are supported:
+
+    .read()
+    .readline()
+    .readlines()
+    .seek()
+    .tell()
+    .info()
+    .geturl()
+    .__iter__()
+    .next()
+    .close()
+
+    and the following attributes are supported:
+
+    .code
+    .msg
+
+    Also supports pickling (but the stdlib currently does something to prevent
+    it: http://python.org/sf/1144636).
+
+    """
+
+    def __init__(self, fp, headers, url, code, msg):
+        self._set_fp(fp)
+        self._headers = headers
+        self._url = url
+        self.code = code
+        self.msg = msg
+
+    def _set_fp(self, fp):
+        self.fp = fp
+        self.read = self.fp.read
+        self.readline = self.fp.readline
+        if hasattr(self.fp, "readlines"): self.readlines = self.fp.readlines
+        if hasattr(self.fp, "fileno"):
+            self.fileno = self.fp.fileno
+        else:
+            self.fileno = lambda: None
+        if hasattr(self.fp, "__iter__"):
+            self.__iter__ = self.fp.__iter__
+            if hasattr(self.fp, "next"):
+                self.next = self.fp.next
+
+    def __repr__(self):
+        return '<%s at %s whose fp = %r>' % (
+            self.__class__.__name__, hex(id(self)), self.fp)
+
+    def info(self):
+        return self._headers
+
+    def geturl(self):
+        return self._url
+
+    def close(self):
+        wrapped = self.fp
+        wrapped.close()
+        new_wrapped = eofresponse(
+            self._url, self._headers, self.code, self.msg)
+        self._set_fp(new_wrapped)
+
+    def __getstate__(self):
+        # There are three obvious options here:
+        # 1. truncate
+        # 2. read to end
+        # 3. close socket, pickle state including read position, then open
+        #    again on unpickle and use Range header
+
+        # 2 breaks pickle protocol, because one expects the original object
+        # to be left unscathed by pickling.  3 is too complicated and
+        # surprising (and too much work ;-) to happen in a sane __getstate__.
+        # So we do 1.
+
+        state = self.__dict__.copy()
+        new_wrapped = eofresponse(
+            self._url, self._headers, self.code, self.msg)
+        state["wrapped"] = new_wrapped
+        return state
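The seek_wrapper above makes a forward-only stream seekable by caching everything read from it and replaying from the cache. A minimal Python 3 sketch of the same idea (hypothetical class name, `io.BytesIO` as the cache, absolute seeks only):

```python
import io

class CachingSeekWrapper:
    """Minimal sketch: make a forward-only stream seekable by caching reads.

    Same idea as seek_wrapper: bytes read from the wrapped stream are
    appended to an in-memory cache, and seek()/read() are served from it.
    """
    def __init__(self, wrapped):
        self.wrapped = wrapped      # forward-only file-like object
        self._cache = io.BytesIO()  # everything read so far
        self._pos = 0               # logical read position

    def _fill(self, upto):
        # pull bytes from the wrapped stream until the cache holds
        # `upto` bytes (or everything, when upto is None)
        have = len(self._cache.getvalue())
        if upto is None:
            self._cache.seek(0, 2)
            self._cache.write(self.wrapped.read())
        elif upto > have:
            self._cache.seek(0, 2)
            self._cache.write(self.wrapped.read(upto - have))

    def read(self, size=-1):
        if size < 0:
            self._fill(None)
        else:
            self._fill(self._pos + size)
        self._cache.seek(self._pos)
        data = self._cache.read() if size < 0 else self._cache.read(size)
        self._pos += len(data)
        return data

    def seek(self, offset, whence=0):
        if whence != 0:
            raise NotImplementedError("sketch supports absolute seek only")
        if offset < 0:
            raise ValueError("negative seek offset")
        self._pos = offset

    def tell(self):
        return self._pos
```

Wrapping an HTTP response body this way is what lets later response processors read the whole body and then seek(0) so downstream consumers see it fresh.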

Added: trunk/bigboard/libgmail/ClientCookie/__init__.py
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/ClientCookie/__init__.py	Mon May 12 18:42:13 2008
@@ -0,0 +1,72 @@
+import sys
+
+try: True
+except NameError:
+    True = 1
+    False = 0
+
+# If you hate the idea of turning bugs into warnings, do:
+# import ClientCookie; ClientCookie.USE_BARE_EXCEPT = False
+USE_BARE_EXCEPT = True
+WARNINGS_STREAM = sys.stdout
+
+# Import names so that they can be imported directly from the package, like
+# this:
+#from ClientCookie import <whatever>
+
+# These work like equivalents from logging.  Use logging direct if you
+# have 2.3.
+from _Debug import getLogger, StreamHandler, NOTSET, INFO, DEBUG
+
+from _ClientCookie import VERSION, __doc__, \
+     Cookie, \
+     CookiePolicy, DefaultCookiePolicy, \
+     CookieJar, FileCookieJar, LoadError, request_host
+from _LWPCookieJar import LWPCookieJar, lwp_cookie_str
+from _MozillaCookieJar import MozillaCookieJar
+from _MSIECookieJar import MSIECookieJar
+try:
+    import bsddb
+except ImportError:
+    pass
+else:
+    from _BSDDBCookieJar import BSDDBCookieJar, CreateBSDDBCookieJar
+#from _MSIEDBCookieJar import MSIEDBCookieJar
+#from _ConnCache import ConnectionCache
+try:
+    from urllib2 import AbstractHTTPHandler
+except ImportError:
+    pass
+else:
+    from ClientCookie._urllib2_support import \
+         Request, \
+         OpenerDirector, build_opener, install_opener, urlopen, \
+         OpenerFactory, urlretrieve, BaseHandler, HeadParser
+    try:
+        from ClientCookie._urllib2_support import XHTMLCompatibleHeadParser
+    except ImportError:
+        pass
+    from ClientCookie._urllib2_support import \
+         HTTPHandler, HTTPRedirectHandler, \
+         HTTPRequestUpgradeProcessor, \
+         HTTPEquivProcessor, SeekableProcessor, HTTPCookieProcessor, \
+         HTTPRefererProcessor, \
+         HTTPRefreshProcessor, HTTPErrorProcessor, \
+         HTTPResponseDebugProcessor, HTTPRedirectDebugProcessor
+
+    try:
+        import robotparser
+    except ImportError:
+        pass
+    else:
+        from ClientCookie._urllib2_support import \
+             HTTPRobotRulesProcessor, RobotExclusionError
+        del robotparser
+
+    import httplib
+    if hasattr(httplib, 'HTTPS'):
+        from ClientCookie._urllib2_support import HTTPSHandler
+    del AbstractHTTPHandler, httplib
+from _Util import http2time, response_seek_wrapper
+str2time = http2time
+del http2time
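The __init__.py above repeatedly gates exports on optional dependencies with try/except ImportError (bsddb, urllib2, robotparser, httplib.HTTPS). The pattern in isolation, as a Python 3 sketch; bsddb was removed from the Python 3 stdlib, so the fallback branch is the one normally taken there:

```python
# Gate an export on an optional dependency, as ClientCookie/__init__.py
# does for bsddb and robotparser.
try:
    import bsddb  # removed from the stdlib in Python 3
except ImportError:
    HAVE_BSDDB = False
    BSDDBCookieJar = None  # export exists but is unusable; callers check it
else:
    HAVE_BSDDB = True

    class BSDDBCookieJar:
        # placeholder standing in for the real _BSDDBCookieJar class
        def __init__(self, filename):
            self._db = bsddb.hashopen(filename)
```

Importers can then do `from pkg import BSDDBCookieJar` unconditionally and test for None, instead of repeating the try/except at every call site.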

Added: trunk/bigboard/libgmail/ClientCookie/_urllib2_support.py
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/ClientCookie/_urllib2_support.py	Mon May 12 18:42:13 2008
@@ -0,0 +1,839 @@
+"""Integration with Python standard library module urllib2.
+
+Also includes a redirection bugfix, support for parsing HTML HEAD blocks for
+the META HTTP-EQUIV tag contents, and following Refresh header redirects.
+
+Copyright 2002-2006 John J Lee <jjl pobox com>
+
+This code is free software; you can redistribute it and/or modify it
+under the terms of the BSD or ZPL 2.1 licenses (see the file
+COPYING.txt included with the distribution).
+
+"""
+
+import copy, time, tempfile, htmlentitydefs, re
+
+import ClientCookie
+from _ClientCookie import CookieJar, request_host
+from _Util import isstringlike, startswith, getheaders, closeable_response
+from _HeadersUtil import is_html
+from _Debug import getLogger
+debug = getLogger("ClientCookie.cookies").debug
+
+try: True
+except NameError:
+    True = 1
+    False = 0
+
+
+CHUNK = 1024  # size of chunks fed to HTML HEAD parser, in bytes
+DEFAULT_ENCODING = 'latin-1'
+
+try:
+    from urllib2 import AbstractHTTPHandler
+except ImportError:
+    pass
+else:
+    import urlparse, urllib2, urllib, httplib
+    import sgmllib
+    # monkeypatch to fix http://www.python.org/sf/803422 :-(
+    sgmllib.charref = re.compile("&#(x?[0-9a-fA-F]+)[^0-9a-fA-F]")
+    from urllib2 import URLError, HTTPError
+    import types, string, socket
+    from cStringIO import StringIO
+    try:
+        import threading
+        _threading = threading; del threading
+    except ImportError:
+        import dummy_threading
+        _threading = dummy_threading; del dummy_threading
+
+    from _Util import response_seek_wrapper
+    from _Request import Request
+
+
+    class BaseHandler(urllib2.BaseHandler):
+        handler_order = 500
+
+        def __cmp__(self, other):
+            if not hasattr(other, "handler_order"):
+                # Try to preserve the old behavior of having custom classes
+                # inserted after default ones (works only for custom user
+                # classes which are not aware of handler_order).
+                return 0
+            return cmp(self.handler_order, other.handler_order)
+
+
+    # This fixes a bug in urllib2 as of Python 2.1.3 and 2.2.2
+    #  (http://www.python.org/sf/549151)
+    # 2.2.3 is broken here (my fault!), 2.3 is fixed.
+    class HTTPRedirectHandler(BaseHandler):
+        # maximum number of redirections to any single URL
+        # this is needed because of the state that cookies introduce
+        max_repeats = 4
+        # maximum total number of redirections (regardless of URL) before
+        # assuming we're in a loop
+        max_redirections = 10
+
+        # Implementation notes:
+
+        # To avoid the server sending us into an infinite loop, the request
+        # object needs to track what URLs we have already seen.  Do this by
+        # adding a handler-specific attribute to the Request object.  The value
+        # of the dict is used to count the number of times the same URL has
+        # been visited.  This is needed because visiting the same URL twice
+        # does not necessarily imply a loop, thanks to state introduced by
+        # cookies.
+
+        # Always unhandled redirection codes:
+        # 300 Multiple Choices: should not handle this here.
+        # 304 Not Modified: no need to handle here: only of interest to caches
+        #     that do conditional GETs
+        # 305 Use Proxy: probably not worth dealing with here
+        # 306 Unused: what was this for in the previous versions of protocol??
+
+        def redirect_request(self, newurl, req, fp, code, msg, headers):
+            """Return a Request or None in response to a redirect.
+
+            This is called by the http_error_30x methods when a redirection
+            response is received.  If a redirection should take place, return a
+            new Request to allow http_error_30x to perform the redirect;
+            otherwise, return None to indicate that an HTTPError should be
+            raised.
+
+            """
+            if code in (301, 302, 303, "refresh") or \
+                   (code == 307 and not req.has_data()):
+                # Strictly (according to RFC 2616), 301 or 302 in response to
+                # a POST MUST NOT cause a redirection without confirmation
+                # from the user (of urllib2, in this case).  In practice,
+                # essentially all clients do redirect in this case, so we do
+                # the same.
+                return Request(newurl,
+                               headers=req.headers,
+                               origin_req_host=req.get_origin_req_host(),
+                               unverifiable=True)
+            else:
+                raise HTTPError(req.get_full_url(), code, msg, headers, fp)
+
+        def http_error_302(self, req, fp, code, msg, headers):
+            # Some servers (incorrectly) return multiple Location headers
+            # (so probably same goes for URI).  Use first header.
+            if headers.has_key('location'):
+                newurl = getheaders(headers, 'location')[0]
+            elif headers.has_key('uri'):
+                newurl = getheaders(headers, 'uri')[0]
+            else:
+                return
+            newurl = urlparse.urljoin(req.get_full_url(), newurl)
+
+            # XXX Probably want to forget about the state of the current
+            # request, although that might interact poorly with other
+            # handlers that also use handler-specific request attributes
+            new = self.redirect_request(newurl, req, fp, code, msg, headers)
+            if new is None:
+                return
+
+            # loop detection
+            # .redirect_dict has a key url if url was previously visited.
+            if hasattr(req, 'redirect_dict'):
+                visited = new.redirect_dict = req.redirect_dict
+                if (visited.get(newurl, 0) >= self.max_repeats or
+                    len(visited) >= self.max_redirections):
+                    raise HTTPError(req.get_full_url(), code,
+                                    self.inf_msg + msg, headers, fp)
+            else:
+                visited = new.redirect_dict = req.redirect_dict = {}
+            visited[newurl] = visited.get(newurl, 0) + 1
+
+            # Don't close the fp until we are sure that we won't use it
+            # with HTTPError.  
+            fp.read()
+            fp.close()
+
+            return self.parent.open(new)
+
+        http_error_301 = http_error_303 = http_error_307 = http_error_302
+        http_error_refresh = http_error_302
+
+        inf_msg = "The HTTP server returned a redirect error that would " \
+                  "lead to an infinite loop.\n" \
+                  "The last 30x error message was:\n"
+
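The loop detection in http_error_302 above counts visits per URL in a dict carried on the request, since revisiting a URL is only a loop after several repeats (cookies can legitimately change the response). The same bookkeeping in isolation (hypothetical class, limits matching max_repeats/max_redirections):

```python
class RedirectLoopDetector:
    """Count visits per URL across a redirect chain, as http_error_302
    does with the request's redirect_dict attribute."""

    def __init__(self, max_repeats=4, max_redirections=10):
        self.max_repeats = max_repeats            # per-URL limit
        self.max_redirections = max_redirections  # whole-chain limit
        self.visited = {}                         # url -> visit count

    def visit(self, url):
        # returns False once the chain looks like an infinite loop
        if (self.visited.get(url, 0) >= self.max_repeats or
                len(self.visited) >= self.max_redirections):
            return False
        self.visited[url] = self.visited.get(url, 0) + 1
        return True
```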
+
+    class HTTPRequestUpgradeProcessor(BaseHandler):
+        # upgrade urllib2.Request to this module's Request
+        # yuck!
+        handler_order = 0  # before anything else
+
+        def http_request(self, request):
+            if not hasattr(request, "add_unredirected_header"):
+                newrequest = Request(request._Request__original, request.data,
+                                     request.headers)
+                try: newrequest.origin_req_host = request.origin_req_host
+                except AttributeError: pass
+                try: newrequest.unverifiable = request.unverifiable
+                except AttributeError: pass
+                request = newrequest
+            return request
+
+        https_request = http_request
+
+
+    # -------------------------------------------------------------------
+    # Beware, the following encoding code is cut-and-pasted between
+    # ClientCookie, ClientForm, mechanize and pullparser, and they differ
+    # subtly :-(((
+    # This particular variant is identical to that in mechanize.
+
+    def unescape(data, entities, encoding):
+        if data is None or "&" not in data:
+            return data
+
+        def replace_entities(match, entities=entities, encoding=encoding):
+            ent = match.group()
+            if ent[1] == "#":
+                return unescape_charref(ent[2:-1], encoding)
+
+            repl = entities.get(ent[1:-1])
+            if repl is not None:
+                repl = unichr(repl)
+                if type(repl) != type(""):
+                    try:
+                        repl = repl.encode(encoding)
+                    except UnicodeError:
+                        repl = ent
+            else:
+                repl = ent
+            return repl
+
+        return re.sub(r"&#?[A-Za-z0-9]+?;", replace_entities, data)
+
+    def unescape_charref(data, encoding):
+        name, base = data, 10
+        if name.startswith("x"):
+            name, base = name[1:], 16
+        uc = unichr(int(name, base))
+        if encoding is None:
+            return uc
+        else:
+            try:
+                repl = uc.encode(encoding)
+            except UnicodeError:
+                repl = "&#%s;" % data
+            return repl
+
+    def get_entitydefs():
+        from codecs import latin_1_decode
+        try:
+            htmlentitydefs.name2codepoint
+        except AttributeError:
+            entitydefs = {}
+            for name, char in htmlentitydefs.entitydefs.items():
+                uc = latin_1_decode(char)[0]
+                if uc.startswith("&#") and uc.endswith(";"):
+                    uc = unescape_charref(uc[2:-1], None)
+                codepoint = ord(uc)
+                entitydefs[name] = codepoint
+        else:
+            entitydefs = htmlentitydefs.name2codepoint
+        return entitydefs
+
+    # -------------------------------------------------------------------
+
+
+    # XXX would self.reset() work, instead of raising this exception?
+    class EndOfHeadError(Exception): pass
+    class AbstractHeadParser:
+        # only these elements are allowed in or before HEAD of document
+        head_elems = ("html", "head",
+                      "title", "base",
+                      "script", "style", "meta", "link", "object")
+        _entitydefs = get_entitydefs()
+        _encoding = DEFAULT_ENCODING
+
+        def __init__(self):
+            self.http_equiv = []
+
+        def start_meta(self, attrs):
+            http_equiv = content = None
+            for key, value in attrs:
+                if key == "http-equiv":
+                    http_equiv = self.unescape_attr_if_required(value)
+                elif key == "content":
+                    content = self.unescape_attr_if_required(value)
+            if http_equiv is not None:
+                self.http_equiv.append((http_equiv, content))
+
+        def end_head(self):
+            raise EndOfHeadError()
+
+        def handle_entityref(self, name):
+            #debug("%s", name)
+            self.handle_data(unescape(
+                '&%s;' % name, self._entitydefs, self._encoding))
+
+        def handle_charref(self, name):
+            #debug("%s", name)
+            self.handle_data(unescape_charref(name, self._encoding))
+
+        def unescape_attr(self, name):
+            #debug("%s", name)
+            return unescape(name, self._entitydefs, self._encoding)
+
+        def unescape_attrs(self, attrs):
+            #debug("%s", attrs)
+            escaped_attrs = {}
+            for key, val in attrs.items():
+                escaped_attrs[key] = self.unescape_attr(val)
+            return escaped_attrs
+
+        def unknown_entityref(self, ref):
+            self.handle_data("&%s;" % ref)
+
+        def unknown_charref(self, ref):
+            self.handle_data("&#%s;" % ref)
+
+
+    try:
+        import HTMLParser
+    except ImportError:
+        pass
+    else:
+        class XHTMLCompatibleHeadParser(AbstractHeadParser,
+                                        HTMLParser.HTMLParser):
+            def __init__(self):
+                HTMLParser.HTMLParser.__init__(self)
+                AbstractHeadParser.__init__(self)
+
+            def handle_starttag(self, tag, attrs):
+                if tag not in self.head_elems:
+                    raise EndOfHeadError()
+                try:
+                    method = getattr(self, 'start_' + tag)
+                except AttributeError:
+                    try:
+                        method = getattr(self, 'do_' + tag)
+                    except AttributeError:
+                        pass # unknown tag
+                    else:
+                        method(attrs)
+                else:
+                    method(attrs)
+
+            def handle_endtag(self, tag):
+                if tag not in self.head_elems:
+                    raise EndOfHeadError()
+                try:
+                    method = getattr(self, 'end_' + tag)
+                except AttributeError:
+                    pass # unknown tag
+                else:
+                    method()
+
+            def unescape(self, name):
+                # Use the entitydefs passed into constructor, not
+                # HTMLParser.HTMLParser's entitydefs.
+                return self.unescape_attr(name)
+
+            def unescape_attr_if_required(self, name):
+                return name  # HTMLParser.HTMLParser already did it
+
+    class HeadParser(AbstractHeadParser, sgmllib.SGMLParser):
+
+        def _not_called(self):
+            assert False
+
+        def __init__(self):
+            sgmllib.SGMLParser.__init__(self)
+            AbstractHeadParser.__init__(self)
+
+        def handle_starttag(self, tag, method, attrs):
+            if tag not in self.head_elems:
+                raise EndOfHeadError()
+            if tag == "meta":
+                method(attrs)
+
+        def unknown_starttag(self, tag, attrs):
+            self.handle_starttag(tag, self._not_called, attrs)
+
+        def handle_endtag(self, tag, method):
+            if tag in self.head_elems:
+                method()
+            else:
+                raise EndOfHeadError()
+
+        def unescape_attr_if_required(self, name):
+            return self.unescape_attr(name)
+
+    def parse_head(fileobj, parser):
+        """Return a list of key, value pairs."""
+        while 1:
+            data = fileobj.read(CHUNK)
+            try:
+                parser.feed(data)
+            except EndOfHeadError:
+                break
+            if len(data) != CHUNK:
+                # this should only happen if there is no HTML body, or if
+                # CHUNK is big
+                break
+        return parser.http_equiv
+
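parse_head above feeds the response to the parser in CHUNK-sized pieces and stops as soon as the parser raises EndOfHeadError. A compact Python 3 sketch of the same chunked META HTTP-EQUIV extraction, using the stdlib html.parser instead of sgmllib (hypothetical names):

```python
import io
from html.parser import HTMLParser

class _EndOfHead(Exception):
    """Signals that parsing has passed the end of the HEAD block."""

class HttpEquivParser(HTMLParser):
    """Collect (http-equiv, content) pairs from META tags, stop after HEAD."""
    def __init__(self):
        super().__init__()
        self.http_equiv = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            d = dict(attrs)
            if "http-equiv" in d:
                self.http_equiv.append((d["http-equiv"], d.get("content")))
        elif tag == "body":
            raise _EndOfHead()

    def handle_endtag(self, tag):
        if tag == "head":
            raise _EndOfHead()

def parse_head_chunked(fileobj, chunk=1024):
    parser = HttpEquivParser()
    while True:
        data = fileobj.read(chunk)
        try:
            parser.feed(data)  # buffers incomplete tags across feeds
        except _EndOfHead:
            break
        if len(data) != chunk:  # short read: EOF before HEAD ended
            break
    return parser.http_equiv
```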
+    class HTTPEquivProcessor(BaseHandler):
+        """Append META HTTP-EQUIV headers to regular HTTP headers."""
+
+        handler_order = 300  # before handlers that look at HTTP headers
+
+        def __init__(self, head_parser_class=HeadParser,
+                     i_want_broken_xhtml_support=False,
+                     ):
+            self.head_parser_class = head_parser_class
+            self._allow_xhtml = i_want_broken_xhtml_support
+
+        def http_response(self, request, response):
+            if not hasattr(response, "seek"):
+                response = response_seek_wrapper(response)
+            headers = response.info()
+            url = response.geturl()
+            ct_hdrs = getheaders(response.info(), "content-type")
+            if is_html(ct_hdrs, url, self._allow_xhtml):
+                try:
+                    try:
+                        html_headers = parse_head(response, self.head_parser_class())
+                    finally:
+                        response.seek(0)
+                except (HTMLParser.HTMLParseError,
+                        sgmllib.SGMLParseError):
+                    pass
+                else:
+                    for hdr, val in html_headers:
+                        # rfc822.Message interprets this as appending, not clobbering
+                        headers[hdr] = val
+            return response
+
+        https_response = http_response
+
+    # XXX ATM this only takes notice of http responses -- probably
+    #   should be independent of protocol scheme (http, ftp, etc.)
+    class SeekableProcessor(BaseHandler):
+        """Make responses seekable."""
+
+        def http_response(self, request, response):
+            if not hasattr(response, "seek"):
+                return response_seek_wrapper(response)
+            return response
+
+        https_response = http_response
+
+    class HTTPCookieProcessor(BaseHandler):
+        """Handle HTTP cookies.
+
+        Public attributes:
+
+        cookiejar: CookieJar instance
+
+        """
+        def __init__(self, cookiejar=None):
+            if cookiejar is None:
+                cookiejar = CookieJar()
+            self.cookiejar = cookiejar
+
+        def http_request(self, request):
+            self.cookiejar.add_cookie_header(request)
+            return request
+
+        def http_response(self, request, response):
+            self.cookiejar.extract_cookies(response, request)
+            return response
+
+        https_request = http_request
+        https_response = http_response
+
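This cookie-handling code is the ancestor of what later entered the Python 2.4 stdlib as cookielib and urllib2.HTTPCookieProcessor; in Python 3 the same request/response hook pair lives in http.cookiejar and urllib.request. Equivalent usage with the modern stdlib:

```python
import urllib.request
from http.cookiejar import CookieJar

# Build an opener whose requests and responses pass through the cookie
# processor, mirroring HTTPCookieProcessor above: add_cookie_header() on
# the way out, extract_cookies() on the way back in.
jar = CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))
# opener.open(url) would now persist cookies in `jar` across requests
```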
+    try:
+        import robotparser
+    except ImportError:
+        pass
+    else:
+        class RobotExclusionError(urllib2.HTTPError):
+            def __init__(self, request, *args):
+                apply(urllib2.HTTPError.__init__, (self,)+args)
+                self.request = request
+
+        class HTTPRobotRulesProcessor(BaseHandler):
+            # before redirections and response debugging, after everything else
+            handler_order = 800
+
+            try:
+                from httplib import HTTPMessage
+            except ImportError:
+                from mimetools import Message
+                http_response_class = Message
+            else:
+                http_response_class = HTTPMessage
+
+            def __init__(self, rfp_class=robotparser.RobotFileParser):
+                self.rfp_class = rfp_class
+                self.rfp = None
+                self._host = None
+
+            def http_request(self, request):
+                host = request.get_host()
+                scheme = request.get_type()
+                if host != self._host:
+                    self.rfp = self.rfp_class()
+                    self.rfp.set_url(scheme+"://"+host+"/robots.txt")
+                    self.rfp.read()
+                    self._host = host
+
+                ua = request.get_header("User-agent", "")
+                if self.rfp.can_fetch(ua, request.get_full_url()):
+                    return request
+                else:
+                    msg = "request disallowed by robots.txt"
+                    raise RobotExclusionError(
+                        request,
+                        request.get_full_url(),
+                        403, msg,
+                        self.http_response_class(StringIO()), StringIO(msg))
+
+            https_request = http_request
+
+    class HTTPRefererProcessor(BaseHandler):
+        """Add Referer header to requests.
+
+        This only makes sense if you use each RefererProcessor for a single
+        chain of requests only (so, for example, if you use a single
+        HTTPRefererProcessor to fetch a series of URLs extracted from a single
+        page, this will break).
+
+        There's a proper implementation of this in module mechanize.
+
+        """
+        def __init__(self):
+            self.referer = None
+
+        def http_request(self, request):
+            if ((self.referer is not None) and
+                not request.has_header("Referer")):
+                request.add_unredirected_header("Referer", self.referer)
+            return request
+
+        def http_response(self, request, response):
+            self.referer = response.geturl()
+            return response
+
+        https_request = http_request
+        https_response = http_response
+
+    class HTTPResponseDebugProcessor(BaseHandler):
+        handler_order = 900  # before redirections, after everything else
+
+        def http_response(self, request, response):
+            if not hasattr(response, "seek"):
+                response = response_seek_wrapper(response)
+            info = getLogger("ClientCookie.http_responses").info
+            try:
+                info(response.read())
+            finally:
+                response.seek(0)
+            info("*****************************************************")
+            return response
+
+        https_response = http_response
+
+    class HTTPRedirectDebugProcessor(BaseHandler):
+        def http_request(self, request):
+            if hasattr(request, "redirect_dict"):
+                info = getLogger("ClientCookie.http_redirects").info
+                info("redirecting to %s", request.get_full_url())
+            return request
+
+    class HTTPRefreshProcessor(BaseHandler):
+        """Perform HTTP Refresh redirections.
+
+        Note that if a non-200 HTTP code has occurred (for example, a 30x
+        redirect), this processor will do nothing.
+
+        By default, only zero-time Refresh headers are redirected.  Use the
+        max_time attribute / constructor argument to allow Refresh with longer
+        pauses.  Use the honor_time attribute / constructor argument to control
+        whether the requested pause is honoured (with a time.sleep()) or
+        skipped in favour of immediate redirection.
+
+        Public attributes:
+
+        max_time: see above
+        honor_time: see above
+
+        """
+        handler_order = 1000
+
+        def __init__(self, max_time=0, honor_time=True):
+            self.max_time = max_time
+            self.honor_time = honor_time
+
+        def http_response(self, request, response):
+            code, msg, hdrs = response.code, response.msg, response.info()
+
+            if code == 200 and hdrs.has_key("refresh"):
+                refresh = getheaders(hdrs, "refresh")[0]
+                ii = string.find(refresh, ";")
+                if ii != -1:
+                    pause, newurl_spec = float(refresh[:ii]), refresh[ii+1:]
+                    jj = string.find(newurl_spec, "=")
+                    if jj == -1:
+                        # no "url=" part after the semicolon at all
+                        debug("bad Refresh header: %r" % refresh)
+                        return response
+                    key, newurl = newurl_spec[:jj], newurl_spec[jj+1:]
+                    if key.strip().lower() != "url":
+                        debug("bad Refresh header: %r" % refresh)
+                        return response
+                else:
+                    pause, newurl = float(refresh), response.geturl()
+                if (self.max_time is None) or (pause <= self.max_time):
+                    if pause > 1E-3 and self.honor_time:
+                        time.sleep(pause)
+                    hdrs["location"] = newurl
+                    # hardcoded http is NOT a bug
+                    response = self.parent.error(
+                        "http", request, response,
+                        "refresh", msg, hdrs)
+
+            return response
+
+        https_response = http_response
+
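http_response above splits the Refresh header value into a pause and an optional url= part. The parsing step in isolation as a Python 3 sketch (hypothetical function; the real-world header grammar is looser than this):

```python
def parse_refresh(value):
    """Return (pause_seconds, url_or_None) from a Refresh header value.

    Mirrors the splitting done in HTTPRefreshProcessor.http_response:
    "5; url=/next" -> (5.0, "/next"); "10" -> (10.0, None), meaning
    refresh the current URL.  Returns None for values it cannot parse.
    """
    if ";" in value:
        pause_part, spec = value.split(";", 1)
        if "=" not in spec:
            return None  # malformed: no url= part after the semicolon
        key, url = spec.split("=", 1)
        if key.strip().lower() != "url":
            return None  # malformed: the key must be "url"
        return float(pause_part), url.strip()
    return float(value), None
```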
+    class HTTPErrorProcessor(BaseHandler):
+        """Process HTTP error responses.
+
+        The purpose of this handler is to allow other response processors a
+        look-in by removing the call to parent.error() from
+        AbstractHTTPHandler.
+
+        For non-200 error codes, this just passes the job on to the
+        Handler.<proto>_error_<code> methods, via the OpenerDirector.error
+        method.  Eventually, urllib2.HTTPDefaultErrorHandler will raise an
+        HTTPError if no other handler handles the error.
+
+        """
+        handler_order = 1000  # after all other processors
+
+        def http_response(self, request, response):
+            code, msg, hdrs = response.code, response.msg, response.info()
+
+            if code != 200:
+                # hardcoded http is NOT a bug
+                response = self.parent.error(
+                    "http", request, response, code, msg, hdrs)
+
+            return response
+
+        https_response = http_response
+
+
+    class AbstractHTTPHandler(BaseHandler):
+
+        def __init__(self, debuglevel=0):
+            self._debuglevel = debuglevel
+
+        def set_http_debuglevel(self, level):
+            self._debuglevel = level
+
+        def do_request_(self, request):
+            host = request.get_host()
+            if not host:
+                raise URLError('no host given')
+
+            if request.has_data():  # POST
+                data = request.get_data()
+                if not request.has_header('Content-type'):
+                    request.add_unredirected_header(
+                        'Content-type',
+                        'application/x-www-form-urlencoded')
+
+            scheme, sel = urllib.splittype(request.get_selector())
+            sel_host, sel_path = urllib.splithost(sel)
+            if not request.has_header('Host'):
+                request.add_unredirected_header('Host', sel_host or host)
+            for name, value in self.parent.addheaders:
+                name = string.capitalize(name)
+                if not request.has_header(name):
+                    request.add_unredirected_header(name, value)
+
+            return request
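As a rough modern analogue of `do_request_` above, today's `urllib.request.Request` derives the method and host from the URL and body in much the same way. A sketch of current-Python behaviour, not of this vendored module:

```python
import urllib.request

# A POST body implies method POST; the host is parsed out of the URL
# so that a Host header can later be filled in if none was supplied.
req = urllib.request.Request("http://www.example.com/path", data=b"a=1")
assert req.get_method() == "POST"
assert req.host == "www.example.com"
```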
+
+        def do_open(self, http_class, req):
+            """Return an addinfourl object for the request, using http_class.
+
+            http_class must implement the HTTPConnection API from httplib.
+            The addinfourl return value is a file-like object.  It also
+            has methods and attributes including:
+                - info(): return a mimetools.Message object for the headers
+                - geturl(): return the original request URL
+                - code: HTTP status code
+            """
+            host = req.get_host()
+            if not host:
+                raise URLError('no host given')
+
+            h = http_class(host) # will parse host:port
+            h.set_debuglevel(self._debuglevel)
+
+            headers = req.headers.copy()
+            headers.update(req.unredirected_hdrs)
+            # We want to make an HTTP/1.1 request, but the addinfourl
+            # class isn't prepared to deal with a persistent connection.
+            # It will try to read all remaining data from the socket,
+            # which will block while the server waits for the next request.
+            # So make sure the connection gets closed after the (only)
+            # request.
+            headers["Connection"] = "close"
+            try:
+                h.request(req.get_method(), req.get_selector(), req.data, headers)
+                r = h.getresponse()
+            except socket.error, err: # XXX what error?
+                raise URLError(err)
+
+            # Pick apart the HTTPResponse object to get the addinfourl
+            # object initialized properly.
+
+            # Wrap the HTTPResponse object in socket's file object adapter
+            # for Windows.  That adapter calls recv(), so delegate recv()
+            # to read().  This weird wrapping allows the returned object to
+            # have readline() and readlines() methods.
+
+            # XXX It might be better to extract the read buffering code
+            # out of socket._fileobject() and into a base class.
+
+            r.recv = r.read
+            fp = socket._fileobject(r, 'rb', -1)
+
+            resp = closeable_response(fp, r.msg, req.get_full_url(),
+                                      r.status, r.reason)
+            return resp
+
+
+    class HTTPHandler(AbstractHTTPHandler):
+        def http_open(self, req):
+            return self.do_open(httplib.HTTPConnection, req)
+
+        http_request = AbstractHTTPHandler.do_request_
+
+    if hasattr(httplib, 'HTTPS'):
+        class HTTPSHandler(AbstractHTTPHandler):
+            def https_open(self, req):
+                return self.do_open(httplib.HTTPSConnection, req)
+
+            https_request = AbstractHTTPHandler.do_request_
+
+##     class HTTPHandler(AbstractHTTPHandler):
+##         def http_open(self, req):
+##             return self.do_open(httplib.HTTP, req)
+
+##         http_request = AbstractHTTPHandler.do_request_
+
+##     if hasattr(httplib, 'HTTPS'):
+##         class HTTPSHandler(AbstractHTTPHandler):
+##             def https_open(self, req):
+##                 return self.do_open(httplib.HTTPS, req)
+
+##             https_request = AbstractHTTPHandler.do_request_
+
+    if int(10*float(urllib2.__version__[:3])) >= 24:
+        # urllib2 supports processors already
+        from _Opener import OpenerMixin
+        class OpenerDirector(urllib2.OpenerDirector, OpenerMixin):
+            pass
+    else:
+        from _Opener import OpenerDirector
+
+    class OpenerFactory:
+        """This class's interface is quite likely to change."""
+
+        default_classes = [
+            # handlers
+            urllib2.ProxyHandler,
+            urllib2.UnknownHandler,
+            HTTPHandler,  # from this module (derived from new AbstractHTTPHandler)
+            urllib2.HTTPDefaultErrorHandler,
+            HTTPRedirectHandler,  # from this module (bugfixed)
+            urllib2.FTPHandler,
+            urllib2.FileHandler,
+            # processors
+            HTTPRequestUpgradeProcessor,
+            #HTTPEquivProcessor,
+            #SeekableProcessor,
+            HTTPCookieProcessor,
+            #HTTPRefererProcessor,
+            #HTTPRefreshProcessor,
+            HTTPErrorProcessor
+            ]
+        handlers = []
+        replacement_handlers = []
+
+        def __init__(self, klass=OpenerDirector):
+            self.klass = klass
+
+        def build_opener(self, *handlers):
+            """Create an opener object from a list of handlers and processors.
+
+            The opener will use several default handlers and processors, including
+            support for HTTP and FTP.
+
+            If any of the handlers passed as arguments are subclasses of the
+            default handlers, the default handlers will not be used.
+
+            """
+            opener = self.klass()
+            default_classes = list(self.default_classes)
+            if hasattr(httplib, 'HTTPS'):
+                default_classes.append(HTTPSHandler)
+            skip = []
+            for klass in default_classes:
+                for check in handlers:
+                    if type(check) == types.ClassType:
+                        if issubclass(check, klass):
+                            skip.append(klass)
+                    elif type(check) == types.InstanceType:
+                        if isinstance(check, klass):
+                            skip.append(klass)
+            for klass in skip:
+                default_classes.remove(klass)
+
+            for klass in default_classes:
+                opener.add_handler(klass())
+            for h in handlers:
+                if type(h) == types.ClassType:
+                    h = h()
+                opener.add_handler(h)
+
+            return opener
+
+    build_opener = OpenerFactory().build_opener
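The subclass-skipping rule documented in `build_opener` survives essentially unchanged in today's `urllib.request`. A quick illustration of the rule (`VerboseHTTPHandler` is a made-up name standing in for any user-supplied handler):

```python
import urllib.request

class VerboseHTTPHandler(urllib.request.HTTPHandler):
    """Hypothetical subclass standing in for a user-supplied handler."""

opener = urllib.request.build_opener(VerboseHTTPHandler)
# The stock HTTPHandler is skipped because our subclass replaces it...
assert not any(type(h) is urllib.request.HTTPHandler for h in opener.handlers)
# ...and an instance of the subclass is added in its place.
assert any(isinstance(h, VerboseHTTPHandler) for h in opener.handlers)
```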
+
+    _opener = None
+    urlopen_lock = _threading.Lock()
+    def urlopen(url, data=None):
+        global _opener
+        if _opener is None:
+            urlopen_lock.acquire()
+            try:
+                if _opener is None:
+                    _opener = build_opener()
+            finally:
+                urlopen_lock.release()
+        return _opener.open(url, data)
+
+    def urlretrieve(url, filename=None, reporthook=None, data=None):
+        global _opener
+        if _opener is None:
+            urlopen_lock.acquire()
+            try:
+                if _opener is None:
+                    _opener = build_opener()
+            finally:
+                urlopen_lock.release()
+        return _opener.retrieve(url, filename, reporthook, data)
+
+    def install_opener(opener):
+        global _opener
+        _opener = opener

Added: trunk/bigboard/libgmail/MANIFEST.in
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/MANIFEST.in	Mon May 12 18:42:13 2008
@@ -0,0 +1,7 @@
+include COPYING
+include README
+include CHANGELOG
+include gmail_transport.py
+include lgconstants.py
+include libgmail.py
+include setup.py
\ No newline at end of file

Added: trunk/bigboard/libgmail/README
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/README	Mon May 12 18:42:13 2008
@@ -0,0 +1,47 @@
+libgmail is licensed under the GPL.
+See the file named COPYING for more information.
+
+Please refer to the libgmail website or project page at sourceforge if
+you encounter problems using libgmail.
+http://libgmail.sf.net/
+http://sourceforge.net/projects/libgmail/
+
+You can contact us by email:
+libgmail-developer@lists.sf.net,
+or, individually at
+stas AT linux DOT isbeter DOT nl
+wdaher AT mit DOT edu
+follower AT myrealbox DOT com
+
+-----------------------------------------------
+Possible usage:
+
+Run this:
+
+  python libgmail.py
+
+If you have the demos package installed, you can also do this:
+
+  python demos/archive.py
+
+or even this:
+
+  python demos/sendmsg.py <account> <to address> <subject> <body>
+
+or perhaps this:
+
+  python demos/gmailsmtp.py # (Then connect to SMTP proxy on local port 8025)
+
+or how about this:
+
+  python demos/gmailftpd.py # (Then connect to FTP proxy on local port 8021,
+                            #  after creating a label named 'ftp' and
+                            #  applying it to some messages with attachments.)
+
+or maybe this:
+
+  python demos/gmailpopd.py # (Then connect to POP3 proxy on local port 8110)
+
+for hours of fun!(*)
+
+(*) Note: Fun may not last for hours. Use at your own risk, blah, blah, etc...
\ No newline at end of file

Added: trunk/bigboard/libgmail/demos/COPYING
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/demos/COPYING	Mon May 12 18:42:13 2008
@@ -0,0 +1,340 @@
+		    GNU GENERAL PUBLIC LICENSE
+		       Version 2, June 1991
+
+ Copyright (C) 1989, 1991 Free Software Foundation, Inc.
+     59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+			    Preamble
+
+  The licenses for most software are designed to take away your
+freedom to share and change it.  By contrast, the GNU General Public
+License is intended to guarantee your freedom to share and change free
+software--to make sure the software is free for all its users.  This
+General Public License applies to most of the Free Software
+Foundation's software and to any other program whose authors commit to
+using it.  (Some other Free Software Foundation software is covered by
+the GNU Library General Public License instead.)  You can apply it to
+your programs, too.
+
+  When we speak of free software, we are referring to freedom, not
+price.  Our General Public Licenses are designed to make sure that you
+have the freedom to distribute copies of free software (and charge for
+this service if you wish), that you receive source code or can get it
+if you want it, that you can change the software or use pieces of it
+in new free programs; and that you know you can do these things.
+
+  To protect your rights, we need to make restrictions that forbid
+anyone to deny you these rights or to ask you to surrender the rights.
+These restrictions translate to certain responsibilities for you if you
+distribute copies of the software, or if you modify it.
+
+  For example, if you distribute copies of such a program, whether
+gratis or for a fee, you must give the recipients all the rights that
+you have.  You must make sure that they, too, receive or can get the
+source code.  And you must show them these terms so they know their
+rights.
+
+  We protect your rights with two steps: (1) copyright the software, and
+(2) offer you this license which gives you legal permission to copy,
+distribute and/or modify the software.
+
+  Also, for each author's protection and ours, we want to make certain
+that everyone understands that there is no warranty for this free
+software.  If the software is modified by someone else and passed on, we
+want its recipients to know that what they have is not the original, so
+that any problems introduced by others will not reflect on the original
+authors' reputations.
+
+  Finally, any free program is threatened constantly by software
+patents.  We wish to avoid the danger that redistributors of a free
+program will individually obtain patent licenses, in effect making the
+program proprietary.  To prevent this, we have made it clear that any
+patent must be licensed for everyone's free use or not licensed at all.
+
+  The precise terms and conditions for copying, distribution and
+modification follow.
+
+		    GNU GENERAL PUBLIC LICENSE
+   TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
+
+  0. This License applies to any program or other work which contains
+a notice placed by the copyright holder saying it may be distributed
+under the terms of this General Public License.  The "Program", below,
+refers to any such program or work, and a "work based on the Program"
+means either the Program or any derivative work under copyright law:
+that is to say, a work containing the Program or a portion of it,
+either verbatim or with modifications and/or translated into another
+language.  (Hereinafter, translation is included without limitation in
+the term "modification".)  Each licensee is addressed as "you".
+
+Activities other than copying, distribution and modification are not
+covered by this License; they are outside its scope.  The act of
+running the Program is not restricted, and the output from the Program
+is covered only if its contents constitute a work based on the
+Program (independent of having been made by running the Program).
+Whether that is true depends on what the Program does.
+
+  1. You may copy and distribute verbatim copies of the Program's
+source code as you receive it, in any medium, provided that you
+conspicuously and appropriately publish on each copy an appropriate
+copyright notice and disclaimer of warranty; keep intact all the
+notices that refer to this License and to the absence of any warranty;
+and give any other recipients of the Program a copy of this License
+along with the Program.
+
+You may charge a fee for the physical act of transferring a copy, and
+you may at your option offer warranty protection in exchange for a fee.
+
+  2. You may modify your copy or copies of the Program or any portion
+of it, thus forming a work based on the Program, and copy and
+distribute such modifications or work under the terms of Section 1
+above, provided that you also meet all of these conditions:
+
+    a) You must cause the modified files to carry prominent notices
+    stating that you changed the files and the date of any change.
+
+    b) You must cause any work that you distribute or publish, that in
+    whole or in part contains or is derived from the Program or any
+    part thereof, to be licensed as a whole at no charge to all third
+    parties under the terms of this License.
+
+    c) If the modified program normally reads commands interactively
+    when run, you must cause it, when started running for such
+    interactive use in the most ordinary way, to print or display an
+    announcement including an appropriate copyright notice and a
+    notice that there is no warranty (or else, saying that you provide
+    a warranty) and that users may redistribute the program under
+    these conditions, and telling the user how to view a copy of this
+    License.  (Exception: if the Program itself is interactive but
+    does not normally print such an announcement, your work based on
+    the Program is not required to print an announcement.)
+
+These requirements apply to the modified work as a whole.  If
+identifiable sections of that work are not derived from the Program,
+and can be reasonably considered independent and separate works in
+themselves, then this License, and its terms, do not apply to those
+sections when you distribute them as separate works.  But when you
+distribute the same sections as part of a whole which is a work based
+on the Program, the distribution of the whole must be on the terms of
+this License, whose permissions for other licensees extend to the
+entire whole, and thus to each and every part regardless of who wrote it.
+
+Thus, it is not the intent of this section to claim rights or contest
+your rights to work written entirely by you; rather, the intent is to
+exercise the right to control the distribution of derivative or
+collective works based on the Program.
+
+In addition, mere aggregation of another work not based on the Program
+with the Program (or with a work based on the Program) on a volume of
+a storage or distribution medium does not bring the other work under
+the scope of this License.
+
+  3. You may copy and distribute the Program (or a work based on it,
+under Section 2) in object code or executable form under the terms of
+Sections 1 and 2 above provided that you also do one of the following:
+
+    a) Accompany it with the complete corresponding machine-readable
+    source code, which must be distributed under the terms of Sections
+    1 and 2 above on a medium customarily used for software interchange; or,
+
+    b) Accompany it with a written offer, valid for at least three
+    years, to give any third party, for a charge no more than your
+    cost of physically performing source distribution, a complete
+    machine-readable copy of the corresponding source code, to be
+    distributed under the terms of Sections 1 and 2 above on a medium
+    customarily used for software interchange; or,
+
+    c) Accompany it with the information you received as to the offer
+    to distribute corresponding source code.  (This alternative is
+    allowed only for noncommercial distribution and only if you
+    received the program in object code or executable form with such
+    an offer, in accord with Subsection b above.)
+
+The source code for a work means the preferred form of the work for
+making modifications to it.  For an executable work, complete source
+code means all the source code for all modules it contains, plus any
+associated interface definition files, plus the scripts used to
+control compilation and installation of the executable.  However, as a
+special exception, the source code distributed need not include
+anything that is normally distributed (in either source or binary
+form) with the major components (compiler, kernel, and so on) of the
+operating system on which the executable runs, unless that component
+itself accompanies the executable.
+
+If distribution of executable or object code is made by offering
+access to copy from a designated place, then offering equivalent
+access to copy the source code from the same place counts as
+distribution of the source code, even though third parties are not
+compelled to copy the source along with the object code.
+
+  4. You may not copy, modify, sublicense, or distribute the Program
+except as expressly provided under this License.  Any attempt
+otherwise to copy, modify, sublicense or distribute the Program is
+void, and will automatically terminate your rights under this License.
+However, parties who have received copies, or rights, from you under
+this License will not have their licenses terminated so long as such
+parties remain in full compliance.
+
+  5. You are not required to accept this License, since you have not
+signed it.  However, nothing else grants you permission to modify or
+distribute the Program or its derivative works.  These actions are
+prohibited by law if you do not accept this License.  Therefore, by
+modifying or distributing the Program (or any work based on the
+Program), you indicate your acceptance of this License to do so, and
+all its terms and conditions for copying, distributing or modifying
+the Program or works based on it.
+
+  6. Each time you redistribute the Program (or any work based on the
+Program), the recipient automatically receives a license from the
+original licensor to copy, distribute or modify the Program subject to
+these terms and conditions.  You may not impose any further
+restrictions on the recipients' exercise of the rights granted herein.
+You are not responsible for enforcing compliance by third parties to
+this License.
+
+  7. If, as a consequence of a court judgment or allegation of patent
+infringement or for any other reason (not limited to patent issues),
+conditions are imposed on you (whether by court order, agreement or
+otherwise) that contradict the conditions of this License, they do not
+excuse you from the conditions of this License.  If you cannot
+distribute so as to satisfy simultaneously your obligations under this
+License and any other pertinent obligations, then as a consequence you
+may not distribute the Program at all.  For example, if a patent
+license would not permit royalty-free redistribution of the Program by
+all those who receive copies directly or indirectly through you, then
+the only way you could satisfy both it and this License would be to
+refrain entirely from distribution of the Program.
+
+If any portion of this section is held invalid or unenforceable under
+any particular circumstance, the balance of the section is intended to
+apply and the section as a whole is intended to apply in other
+circumstances.
+
+It is not the purpose of this section to induce you to infringe any
+patents or other property right claims or to contest validity of any
+such claims; this section has the sole purpose of protecting the
+integrity of the free software distribution system, which is
+implemented by public license practices.  Many people have made
+generous contributions to the wide range of software distributed
+through that system in reliance on consistent application of that
+system; it is up to the author/donor to decide if he or she is willing
+to distribute software through any other system and a licensee cannot
+impose that choice.
+
+This section is intended to make thoroughly clear what is believed to
+be a consequence of the rest of this License.
+
+  8. If the distribution and/or use of the Program is restricted in
+certain countries either by patents or by copyrighted interfaces, the
+original copyright holder who places the Program under this License
+may add an explicit geographical distribution limitation excluding
+those countries, so that distribution is permitted only in or among
+countries not thus excluded.  In such case, this License incorporates
+the limitation as if written in the body of this License.
+
+  9. The Free Software Foundation may publish revised and/or new versions
+of the General Public License from time to time.  Such new versions will
+be similar in spirit to the present version, but may differ in detail to
+address new problems or concerns.
+
+Each version is given a distinguishing version number.  If the Program
+specifies a version number of this License which applies to it and "any
+later version", you have the option of following the terms and conditions
+either of that version or of any later version published by the Free
+Software Foundation.  If the Program does not specify a version number of
+this License, you may choose any version ever published by the Free Software
+Foundation.
+
+  10. If you wish to incorporate parts of the Program into other free
+programs whose distribution conditions are different, write to the author
+to ask for permission.  For software which is copyrighted by the Free
+Software Foundation, write to the Free Software Foundation; we sometimes
+make exceptions for this.  Our decision will be guided by the two goals
+of preserving the free status of all derivatives of our free software and
+of promoting the sharing and reuse of software generally.
+
+			    NO WARRANTY
+
+  11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
+FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW.  EXCEPT WHEN
+OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
+PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
+OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.  THE ENTIRE RISK AS
+TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU.  SHOULD THE
+PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
+REPAIR OR CORRECTION.
+
+  12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
+WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
+REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
+INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
+OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
+TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
+YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
+PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGES.
+
+		     END OF TERMS AND CONDITIONS
+
+	    How to Apply These Terms to Your New Programs
+
+  If you develop a new program, and you want it to be of the greatest
+possible use to the public, the best way to achieve this is to make it
+free software which everyone can redistribute and change under these terms.
+
+  To do so, attach the following notices to the program.  It is safest
+to attach them to the start of each source file to most effectively
+convey the exclusion of warranty; and each file should have at least
+the "copyright" line and a pointer to where the full notice is found.
+
+    <one line to give the program's name and a brief idea of what it does.>
+    Copyright (C) <year>  <name of author>
+
+    This program is free software; you can redistribute it and/or modify
+    it under the terms of the GNU General Public License as published by
+    the Free Software Foundation; either version 2 of the License, or
+    (at your option) any later version.
+
+    This program is distributed in the hope that it will be useful,
+    but WITHOUT ANY WARRANTY; without even the implied warranty of
+    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+    GNU General Public License for more details.
+
+    You should have received a copy of the GNU General Public License
+    along with this program; if not, write to the Free Software
+    Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+
+
+Also add information on how to contact you by electronic and paper mail.
+
+If the program is interactive, make it output a short notice like this
+when it starts in an interactive mode:
+
+    Gnomovision version 69, Copyright (C) year  name of author
+    Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
+    This is free software, and you are welcome to redistribute it
+    under certain conditions; type `show c' for details.
+
+The hypothetical commands `show w' and `show c' should show the appropriate
+parts of the General Public License.  Of course, the commands you use may
+be called something other than `show w' and `show c'; they could even be
+mouse-clicks or menu items--whatever suits your program.
+
+You should also get your employer (if you work as a programmer) or your
+school, if any, to sign a "copyright disclaimer" for the program, if
+necessary.  Here is a sample; alter the names:
+
+  Yoyodyne, Inc., hereby disclaims all copyright interest in the program
+  `Gnomovision' (which makes passes at compilers) written by James Hacker.
+
+  <signature of Ty Coon>, 1 April 1989
+  Ty Coon, President of Vice
+
+This General Public License does not permit incorporating your program into
+proprietary programs.  If your program is a subroutine library, you may
+consider it more useful to permit linking proprietary applications with the
+library.  If this is what you want to do, use the GNU Library General
+Public License instead of this License.

Added: trunk/bigboard/libgmail/demos/CVS/Entries
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/demos/CVS/Entries	Mon May 12 18:42:13 2008
@@ -0,0 +1,16 @@
+/COPYING/1.1/Tue Aug 16 07:04:46 2005//
+/MakeTarBall.py/1.3/Tue Oct  9 01:22:31 2007//
+/README/1.1/Tue Aug 16 07:04:46 2005//
+/archive.py/1.9/Fri Sep 21 19:22:38 2007//
+/filelist/1.2/Tue Oct  9 01:22:31 2007//
+/folderlist/1.2/Tue Aug 16 07:31:06 2005//
+/gcp.py/1.2/Tue Aug 16 06:43:47 2005//
+/gmailftpd.py/1.6/Sun Sep 11 01:00:19 2005//
+/gmailpopd.py/1.5/Tue Aug 16 10:34:08 2005//
+/gmailsmtp.py/1.4/Tue Aug 16 06:43:47 2005//
+/readmail.py/1.2/Sun Oct  7 13:47:29 2007//
+/sendmsg.py/1.4/Sun Sep 18 18:41:48 2005//
+/test_fwd_attach.py/1.2/Tue Aug 16 06:43:47 2005//
+/test_notifier.py/1.2/Tue Aug 16 06:43:47 2005//
+/unreadmsgcount.py/1.2/Tue Aug 16 06:43:47 2005//
+D

Added: trunk/bigboard/libgmail/demos/CVS/Repository
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/demos/CVS/Repository	Mon May 12 18:42:13 2008
@@ -0,0 +1 @@
+libgmail/demos

Added: trunk/bigboard/libgmail/demos/CVS/Root
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/demos/CVS/Root	Mon May 12 18:42:13 2008
@@ -0,0 +1 @@
+:pserver:anonymous@libgmail.cvs.sourceforge.net:/cvsroot/libgmail

Added: trunk/bigboard/libgmail/demos/MakeTarBall.py
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/demos/MakeTarBall.py	Mon May 12 18:42:13 2008
@@ -0,0 +1,51 @@
+#!/usr/bin/env python
+
+# make tarball!
+VERSION = '0.3'
+PACKAGENAME = 'libgmail-docs_'
+import os
+
+print "\nCreate API docs"
+os.system('epydoc -o API ../libgmail.py')
+
+def cleanup(*args):
+    """Used by os.path.walk to traverse the tree and remove CVS dirs"""
+    if os.path.split(args[1])[1] == "CVS":
+        print "Remove ",args[1]
+        os.system('rm -r %s' % args[1])
+    
+filelist = open('filelist', 'r')
+folderlist = open('folderlist', 'r')
+myFiles = filelist.readlines()
+myFolders = folderlist.readlines()
+os.system('mkdir %s%s' % (PACKAGENAME,VERSION))
+for file in myFiles:
+    os.system('cp %s %s%s' % (file[:-1], PACKAGENAME,VERSION))
+
+for folder in myFolders:
+    os.system('mkdir %s%s/%s' % (PACKAGENAME,VERSION, folder[:-1]))
+    os.system('cp -r %s %s%s' % (folder[:-1],PACKAGENAME, VERSION))
+
+# removing the CVS stuff
+os.path.walk('%s%s' % (PACKAGENAME,VERSION),cleanup,None)
+
+print "\nCreate a GNU/Linux tarball..."
+try:
+    execString = 'tar -czf %s%s.tgz %s%s/' % (PACKAGENAME,VERSION,PACKAGENAME, VERSION)
+    print execString
+    os.system(execString)
+except Exception,info:
+    print info,"\nYou must have the tar package installed"
+else:
+    print "Done.\n"
+    
+print "Create a Windows compatible zipfile..."
+try:
+    execString = 'zip -rq %s%s.zip ./%s%s' % (PACKAGENAME,VERSION,PACKAGENAME, VERSION)
+    print execString
+    os.system(execString)
+except Exception,info:
+    print info,"\nYou must have the zip package installed."
+else:
+    print "Done\n"
+os.system('rm -rf %s%s' % (PACKAGENAME,VERSION))
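MakeTarBall.py shells out to external `tar` and `zip` binaries. The same packaging step can be done with the standard library alone; a stdlib-only sketch (`make_archives` is a hypothetical helper, not in the libgmail sources):

```python
import os
import tarfile
import zipfile

def make_archives(srcdir, basename):
    """Build basename.tgz and basename.zip from srcdir, skipping CVS dirs."""
    parent = os.path.dirname(srcdir) or "."
    with tarfile.open(basename + ".tgz", "w:gz") as tf:
        # add()'s filter callback excludes an entry by returning None.
        tf.add(srcdir, arcname=os.path.basename(srcdir),
               filter=lambda ti: None if "CVS" in ti.name.split("/") else ti)
    with zipfile.ZipFile(basename + ".zip", "w", zipfile.ZIP_DEFLATED) as zf:
        for root, dirs, files in os.walk(srcdir):
            dirs[:] = [d for d in dirs if d != "CVS"]  # prune CVS dirs in place
            for name in files:
                path = os.path.join(root, name)
                zf.write(path, os.path.relpath(path, parent))
```

Unlike the `os.system` calls above, this needs no external `tar` or `zip` package installed and raises ordinary Python exceptions on failure.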

Added: trunk/bigboard/libgmail/demos/README
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/demos/README	Mon May 12 18:42:13 2008
@@ -0,0 +1,45 @@
+libgmail is licensed under the GPL.
+See the file named COPYING for more information.
+
+Please refer to the libgmail website or project page at sourceforge if
+you encounter problems using libgmail.
+http://libgmail.sf.net
+http://sourceforge.net/projects/libgmail/
+
+You can contact us by email:
+stas AT linux DOT isbeter DOT nl
+wdaher AT mit DOT edu
+follower AT myrealbox DOT com
+
+-----------------------------------------------
+Possible usage:
+
+Run this:
+
+  python libgmail.py
+
+or this:
+
+  python demos/archive.py
+
+or even this:
+
+  python demos/sendmsg.py <account> <to address> <subject> <body>
+
+or perhaps this:
+
+  python demos/gmailsmtp.py # (Then connect to SMTP proxy on local port 8025)
+
+or how about this:
+
+  python demos/gmailftpd.py # (Then connect to FTP proxy on local port 8021,
+                            #  after creating a label named 'ftp' and
+                            #  applying it to some messages with attachments.)
+
+or maybe this:
+
+  python demos/gmailpopd.py # (Then connect to POP3 proxy on local port 8110)
+
+for hours of fun!(*)
+
+(*) Note: Fun may not last for hours. Use at your own risk, blah, blah, etc...

Added: trunk/bigboard/libgmail/demos/archive.py
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/demos/archive.py	Mon May 12 18:42:13 2008
@@ -0,0 +1,95 @@
+#!/usr/bin/env python
+#
+# archive.py -- Demo to archive all threads in a Gmail folder
+#
+# $Revision: 1.9 $ ($Date: 2007/09/21 19:22:38 $)
+#
+# Author: follower myrealbox com
+#
+# License: GPL 2.0
+#
+import os
+import sys
+import logging
+import re
+import time
+
+# Allow us to run using installed `libgmail` or the one in parent directory.
+try:
+    import libgmail
+    logging.warn("Note: Using currently installed `libgmail` version.")
+except ImportError:
+    # Urghhh...
+    sys.path.insert(1,
+                    os.path.realpath(os.path.join(os.path.dirname(__file__),
+                                                  os.path.pardir)))
+
+    import libgmail
+
+    
+if __name__ == "__main__":
+    import sys
+    from getpass import getpass
+
+    try:
+        name = sys.argv[1]
+    except IndexError:
+        name = raw_input("Gmail account name: ")
+        
+    pw = getpass("Password: ")
+
+    ga = libgmail.GmailAccount(name, pw)
+
+    print "\nPlease wait, logging in..."
+
+    try:
+        ga.login()
+    except libgmail.GmailLoginFailure:
+        print "\nLogin failed. (Wrong username/password?)"
+    else:
+        print "Log in successful.\n"
+
+        searches = libgmail.STANDARD_FOLDERS + ga.getLabelNames()
+
+        while 1:
+            try:
+                print "Select folder or label to archive: (Ctrl-C to exit)"
+                print "Note: *All* pages of results will be archived."
+
+                for optionId, optionName in enumerate(searches):
+                    print "  %d. %s" % (optionId, optionName)
+
+                name = searches[int(raw_input("Choice: "))]
+
+                if name in libgmail.STANDARD_FOLDERS:
+                    result = ga.getMessagesByFolder(name, True)
+                else:
+                    result = ga.getMessagesByLabel(name, True)
+
+                print
+                from_re = re.compile('^(>*From )', re.MULTILINE)
+                if len(result):
+                    now = time.strftime("%Y-%m-%d_%H.%M.%S")
+                    mbox = open("archive-%s-%s.mbox" % (name, now), "w")
+                    try:
+                        for thread in result:
+                            print
+                            print thread.id, len(thread), thread.subject
+
+                            for msg in thread:
+                                print "  ", msg.id, msg.number, msg.subject
+                                mbox.write("From - Thu Jan 22 22:03:29 1998\n")
+                                source = msg.source.replace("\r","").lstrip()
+                                mbox.write(from_re.sub('>\\1', source))
+                                mbox.write("\n\n")
+                    finally:
+                        mbox.close()
+                else:
+                    print "No threads found in `%s`." % name
+                print
+                    
+            except KeyboardInterrupt:
+                break
+
+    print "\n\nDone."
+    
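The `from_re` substitution in archive.py implements mboxrd-style quoting: any body line that begins with zero or more `>` characters followed by `From ` gets one more `>` prepended, so it cannot be mistaken for an mbox message separator. As a standalone sketch (hypothetical helper name):

```python
import re

# Same pattern archive.py uses: match "From " lines, including
# already-quoted ones, at the start of any line in the message.
from_re = re.compile('^(>*From )', re.MULTILINE)

def quote_from_lines(text):
    """Prefix one '>' to every (possibly already-quoted) 'From ' line."""
    return from_re.sub('>\\1', text)
```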

Added: trunk/bigboard/libgmail/demos/filelist
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/demos/filelist	Mon May 12 18:42:13 2008
@@ -0,0 +1,12 @@
+archive.py
+gcp.py
+gmailftpd.py
+gmailpopd.py
+gmailsmtp.py
+readmail.py
+sendmsg.py
+test_fwd_attach.py
+test_notifier.py
+unreadmsgcount.py
+README
+COPYING

Added: trunk/bigboard/libgmail/demos/folderlist
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/demos/folderlist	Mon May 12 18:42:13 2008
@@ -0,0 +1 @@
+API

Added: trunk/bigboard/libgmail/demos/gcp.py
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/demos/gcp.py	Mon May 12 18:42:13 2008
@@ -0,0 +1,67 @@
+#!/usr/bin/env python
+#
+# gcp.py -- Demo to copy a file to Gmail using libgmail
+#
+# $Revision: 1.2 $ ($Date: 2005/08/16 06:43:47 $)
+#
+# Author: follower myrealbox com
+#
+# License: GPL 2.0
+#
+import os
+import sys
+import logging
+
+# Allow us to run using installed `libgmail` or the one in parent directory.
+try:
+    import libgmail
+    logging.warn("Note: Using currently installed `libgmail` version.")
+except ImportError:
+    # Urghhh...
+    sys.path.insert(1,
+                    os.path.realpath(os.path.join(os.path.dirname(__file__),
+                                                  os.path.pardir)))
+
+    import libgmail
+
+    
+if __name__ == "__main__":
+    import sys
+    from getpass import getpass
+
+    # TODO: Allow copy from account.
+
+    try:
+        filename = sys.argv[1]
+        destination = sys.argv[2]
+    except IndexError:
+        print "Usage: %s <filename> <account>:[<label>/]" % sys.argv[0]
+        raise SystemExit
+
+    name, label = destination.split(":", 1)
+
+    if label.endswith("/"):
+        label = label[:-1]
+
+    if not label:
+        label = None
+        
+    pw = getpass("Password: ")
+
+    ga = libgmail.GmailAccount(name, pw)
+
+    print "\nPlease wait, logging in..."
+
+    try:
+        ga.login()
+    except libgmail.GmailLoginFailure:
+        print "\nLogin failed. (Wrong username/password?)"
+    else:
+        print "Log in successful.\n"
+
+        if ga.storeFile(filename, label=label):
+            print "File `%s` stored successfully in `%s`." % (filename, label)
+        else:
+            print "Could not store file."
+
+        print "Done."

Added: trunk/bigboard/libgmail/demos/gmailftpd.py
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/demos/gmailftpd.py	Mon May 12 18:42:13 2008
@@ -0,0 +1,386 @@
+#!/usr/bin/env python
+#
+# gmailftpd.py -- Demo to allow retrieval of attachments via FTP.
+#
+# $Revision: 1.6 $ ($Date: 2005/09/11 01:00:19 $)
+#
+# Author: follower myrealbox com
+#
+# License: Dual GPL 2.0 and PSF (This file only.)
+#
+#
+# Based on smtpd.py by Barry Warsaw <barry python org> (Thanks Barry!)
+#
+# Major rewrite of the data channel handling by Willy De la Court <wdl linux-lovers be>
+#
+# Note: Requires messages to be marked with a label named "ftp".
+#       (This requirement can be removed.)
+#
+# TODO: Handle duplicate file names...
+#
+
+import sys
+import os
+import re
+import time
+import socket
+import asyncore
+import asynchat
+import logging
+
+program = sys.argv[0]
+__version__ = 'Python Gmail FTP proxy version 0.0.3'
+
+# Allow us to run using installed `libgmail` or the one in parent directory.
+try:
+    import libgmail
+    logging.warn("Note: Using currently installed `libgmail` version.")
+except ImportError:
+    # Urghhh...
+    sys.path.insert(1,
+                    os.path.realpath(os.path.join(os.path.dirname(__file__),
+                                                  os.path.pardir)))
+
+    import libgmail
+
+
+class Devnull:
+    def write(self, msg): pass
+    def flush(self): pass
+
+
+DEBUGSTREAM = Devnull()
+NEWLINE = '\n'
+EMPTYSTRING = ''
+
+nextPort = 9021
+
+class FTPChannel(asynchat.async_chat):
+
+    def __init__(self, server, conn, addr):
+        asynchat.async_chat.__init__(self, conn)
+        self.__server = server
+        self.__conn = conn
+        self.__addr = addr
+        self.__line = []
+        self.__fqdn = socket.getfqdn()
+        self.__peer = conn.getpeername()
+        print >> DEBUGSTREAM, 'Peer:', repr(self.__peer)
+        self.push('220 %s %s' % (self.__fqdn, __version__))
+        self.set_terminator('\r\n')
+        self.my_type = "A"
+        self.my_cwd = ""
+        self.my_user = ""
+        self.filenames = {}
+        self._activeDataChannel = None
+
+
+    # Overrides base class for convenience
+    def push(self, msg):
+        asynchat.async_chat.push(self, msg + '\r\n')
+
+    # Implementation of base class abstract method
+    def collect_incoming_data(self, data):
+        self.__line.append(data)
+
+    # Implementation of base class abstract method
+    def found_terminator(self):
+        line = EMPTYSTRING.join(self.__line)
+        print >> DEBUGSTREAM, 'Data:', repr(line)
+        self.__line = []
+        if not line:
+            self.push('500 Error: bad syntax')
+            return
+        method = None
+        i = line.find(' ')
+        if i < 0:
+            command = line.upper()
+            arg = None
+        else:
+            command = line[:i].upper()
+            arg = line[i+1:].strip()
+        method = getattr(self, 'ftp_' + command, None)
+        if not method:
+            self.push('502 Error: command "%s" not implemented' % command)
+            return
+        method(arg)
+        return
+
+    def get_filelist(self):
+        """
+        Get the file list from GMail
+        """
+        r = self.ga.getMessagesByLabel('ftp')
+        for th in r:
+            for m in th:
+                for a in m.attachments:
+                    self.filenames[a.filename] = a
+
+    def ftp_USER(self, arg):
+        """
+        Process USER ftp command
+        """
+        if not arg:
+            self.push('501 Syntax: USER username')
+        else:
+            self.my_user = arg
+            self.push('331 Password required')
+
+    def ftp_PASS(self, arg = ''):
+        """
+        Process PASS ftp command
+        """
+        self.ga = libgmail.GmailAccount(self.my_user, arg)
+
+        try:
+            self.ga.login()
+        except libgmail.GmailLoginFailure:
+            self.push('530 Login failed. (Wrong username/password?)')
+        else:
+            self.push('230 User logged in')
+
+    def ftp_LIST(self, arg):
+        """
+        Process LIST ftp command
+        """
+        self.filenames = {}
+        self._activeDataChannel.cmd = "LIST " + str(arg)
+        self._activeDataChannel.handle_LIST()
+
+    def ftp_RNFR(self, arg):
+        """
+        Process RNFR ftp command
+        """
+        self.push('350 File exists, ready for destination name')
+
+    def ftp_RNTO(self, arg):
+        """
+        Process RNTO ftp command
+        """
+        self.push('250 RNTO command successful.')
+
+    def ftp_SIZE(self, arg):
+        """
+        Process SIZE ftp command
+        """
+        name_req = arg
+        if name_req[:1] == '/':
+            name_req = name_req[1:]
+        try:
+            response = "213 %d" % (self.filenames[name_req].filesize)
+        except KeyError:
+            self.push("550 %s: No such file or directory." % (name_req))
+        else:
+            self.push(response)
+
+    def ftp_RETR(self, arg):
+        """
+        Process RETR ftp command
+        """
+        self._activeDataChannel.cmd = "RETR " + str(arg)
+        self._activeDataChannel.handle_RETR()
+
+
+    def ftp_STOR(self, arg):
+        """
+        Process STOR ftp command
+        """
+        # TODO: Check this is legit, don't just copy & paste from RETR...
+        self._activeDataChannel.cmd = "STOR " + str(arg)
+        self._activeDataChannel.handle_STOR()
+
+
+    def ftp_PASV(self, arg):
+        """
+        Process PASV ftp command
+        """
+        # *** TODO: Don't allow non-binary file transfers here?
+        global nextPort
+        PORT = nextPort
+        nextPort += 1
+        ADDR = ('127.0.0.1', PORT)
+        self._activeDataChannel = DataChannel(ADDR, self)
+        self.push('227 =127,0,0,1,%d,%d' % (PORT / 256, PORT % 256))
+
+
+    def ftp_QUIT(self, arg):
+        """
+        Process QUIT ftp command
+        """
+        # args is ignored
+        self.push('221 Bye')
+        self.close_when_done()
+
+
+    def ftp_CWD(self, arg):
+        """
+        Process CWD ftp command
+        """
+        # TODO: Attach CWD (and other items) to channel...
+        self.my_cwd = arg
+        self.push('550 ' + self.my_cwd + ': No such file or directory.')
+
+    def ftp_PWD(self, arg):
+        """
+        Process PWD ftp command
+        """
+        self.push('257 "/" is current directory.')
+
+
+    def ftp_TYPE(self, arg):
+        """
+        Process TYPE ftp command
+        """
+        response = '200 OK'
+
+        if arg in ["A", "A N"]:
+            self.my_type = "A"
+        elif arg in ["I", "L 8"]:
+            self.my_type = "I"
+        else:
+            response = "504 Unsupported TYPE parameter"
+
+        self.push(response)
+
+
+import tempfile
+
+class DataChannel(asyncore.dispatcher):
+    """
+    """
+    def __init__(self, localaddr, ControlChannel):
+        self._ControlChannel = ControlChannel
+        self._localaddr = localaddr
+        asyncore.dispatcher.__init__(self)
+        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
+        # try to re-use a server port if possible
+        self.set_reuse_addr()
+        self.bind(localaddr)
+        self.listen(5)
+        print >> DEBUGSTREAM, \
+              '%s started at %s\n\tLocal addr: %s\n' % (
+            self.__class__.__name__, time.ctime(time.time()),
+            localaddr)
+
+        self.cmd = ""
+
+    def handle_accept(self):
+        """
+        Start the DATA connection
+        """
+        conn, addr = self.accept()
+
+        self._ControlChannel.push('150 Opening data connection.')
+
+        self.conn = conn
+
+    def handle_LIST(self):
+        """
+        Send the data response for the LIST command
+        """
+        self._ControlChannel.get_filelist()
+        result = ""
+        for attachment in self._ControlChannel.filenames.values():
+            result += "-rw-r--r--   1 %s %s %10d Jan  1  2000 %s\r\n" % (
+                self._ControlChannel.my_user, self._ControlChannel.my_user,
+                attachment.filesize, attachment.filename)
+        self.conn.sendall(result)
+        self._ControlChannel.push('226 Transfer complete.')
+        self.close()
+        self.conn.close()
+
+    def handle_RETR(self):
+        """
+        Send the file for the RETR command
+        """
+        if self._ControlChannel.my_type != "I":
+            self._ControlChannel.push('426 Only binary transfer mode is supported')
+            self.close()
+            self.conn.close()
+            return
+
+        name_req = self.cmd[5:]
+        # Remove leading /
+        if name_req[:1] == '/':
+            name_req = name_req[1:]
+        print >> DEBUGSTREAM, "Reading `%s`." % (name_req)
+        # check if the file exists
+        try:
+            name = self._ControlChannel.filenames[name_req].filename
+        except KeyError:
+            # if not the list is probably not read yet
+            self._ControlChannel.get_filelist()
+        # try again
+        try:
+            self.conn.sendall(self._ControlChannel.filenames[name_req].content)
+            response = '226 Transfer complete.'
+        except KeyError:
+            response = '550 ' + name_req + ': No such file or directory.'
+        self._ControlChannel.push(response)
+        self.close()
+        self.conn.close()
+
+    def handle_STOR(self):
+        """
+        Receive the file for the STOR command
+        """
+        if self._ControlChannel.my_type != "I":
+            self._ControlChannel.push('426 Only binary transfer mode is supported')
+            self.close()
+            self.conn.close()
+            return
+
+        buffer = ""
+        while True:
+            data = self.conn.recv(1024)
+            if not data:
+                break
+            buffer += data
+
+        filename = self.cmd[5:]
+        # Remove leading /
+        if filename[:1] == '/':
+            filename = filename[1:]
+        tempDir = tempfile.mkdtemp()
+        # Remove a trailing '.part' (KDE appends this while uploading files)
+        tempFileName = re.sub('\.part$', '', filename)
+        tempFilePath = os.path.join(tempDir, tempFileName)
+        print >> DEBUGSTREAM, "Writing `%s` to `%s`." % (filename, tempFilePath)
+        open(tempFilePath, "wb").write(buffer)
+
+        self._ControlChannel.ga.storeFile(tempFilePath, "ftp")
+
+        os.remove(tempFilePath)
+        os.rmdir(tempDir)
+        self._ControlChannel.push('226 Transfer complete.')
+        self.close()
+        self.conn.close()
+
+class FTPServer(asyncore.dispatcher):
+    def __init__(self, localaddr):
+        self._localaddr = localaddr
+        asyncore.dispatcher.__init__(self)
+        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
+        # try to re-use a server port if possible
+        self.set_reuse_addr()
+        self.bind(localaddr)
+        self.listen(5)
+        print >> DEBUGSTREAM, \
+              '%s started at %s\n\tLocal addr: %s\n' % (
+            self.__class__.__name__, time.ctime(time.time()),
+            localaddr)
+
+    def handle_accept(self):
+        conn, addr = self.accept()
+        print >> DEBUGSTREAM, 'Incoming connection from %s' % repr(addr)
+        channel = FTPChannel(self, conn, addr)
+
+
+
+if __name__ == '__main__':
+    DEBUGSTREAM = sys.stderr
+
+    proxy = FTPServer(('127.0.0.1', 8021))
+
+    try:
+        asyncore.loop()
+    except KeyboardInterrupt:
+        pass
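
The `227` reply built in `ftp_PASV` packs the data-channel port into two decimal bytes, `port / 256` and `port % 256`, alongside the dotted IP. That encoding and its inverse can be sketched as standalone helpers (hypothetical names, not part of the proxy):

```python
def encode_pasv(host, port):
    """Build the host/port field of an FTP 227 PASV reply."""
    h1, h2, h3, h4 = host.split(".")
    return "%s,%s,%s,%s,%d,%d" % (h1, h2, h3, h4, port // 256, port % 256)

def decode_pasv(field):
    """Recover (host, port) from the comma-separated 227 payload."""
    parts = field.split(",")
    host = ".".join(parts[:4])
    port = int(parts[4]) * 256 + int(parts[5])
    return host, port
```

This is why the proxy replies `227 =127,0,0,1,35,61` when it allocates port 9021.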

Added: trunk/bigboard/libgmail/demos/gmailpopd.py
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/demos/gmailpopd.py	Mon May 12 18:42:13 2008
@@ -0,0 +1,302 @@
+#!/usr/bin/env python
+#
+# gmailpopd.py -- Demo to provide a POP3 proxy for Gmail message retrieval.
+#
+# $Revision: 1.5 $ ($Date: 2005/08/16 10:34:08 $)
+#
+# Author: follower myrealbox com
+#
+# License: Dual GPL 2.0 and PSF (This file only.)
+#
+#
+# Based on smtpd.py by Barry Warsaw <barry python org> (Thanks Barry!)
+#
+## Applied the Debian patch bug report #277310 --SZ--
+
+import sys
+import os
+import time
+import socket
+import asyncore
+import asynchat
+import logging
+
+program = sys.argv[0]
+__version__ = 'Python Gmail POP3 proxy version 0.0.1'
+
+# Allow us to run using installed `libgmail` or the one in parent directory.
+try:
+    import libgmail
+    logging.warn("Note: Using currently installed `libgmail` version.")
+except ImportError:
+    # Urghhh...
+    sys.path.insert(1,
+                    os.path.realpath(os.path.join(os.path.dirname(__file__),
+                                                  os.path.pardir)))
+
+    import libgmail
+
+
+from libgmail import U_AS_SUBSET_UNREAD
+
+
+class Devnull:
+    def write(self, msg): pass
+    def flush(self): pass
+
+
+DEBUGSTREAM = Devnull()
+NEWLINE = '\n'
+EMPTYSTRING = ''
+
+
+# TODO: Get rid of this global stuff...
+my_user = ""
+snapshot = None # Account snapshot...
+
+class GmailAccountSnapshot:
+    """
+    """
+
+    def __init__(self, ga):
+        """
+        """
+        self.account = ga
+        # TODO: Work out at what stage messages get marked as 'read'.
+        #       (as I think of it, it happens when I retrieve the
+        #        messages in the threads, should really preserve read/unread
+        #        state then.)
+        # TODO: Fix this so it does not retrieve messages that have already
+        #       been read. ("unread" is a property of thread in this case?)
+        #       Is this even possible without caching stuff ourselves,
+        #       maybe use "archive" as the equivalent of read?
+        self._unreadThreads = ga.getMessagesByQuery("is:" + U_AS_SUBSET_UNREAD,
+                                                    True)#TODO:True as default?
+        self.unreadMsgs = []
+        for thread in self._unreadThreads:
+            for msg in thread:
+                self.unreadMsgs.append(msg)
+
+
+    def retrieveMessage(self, msgNumber, bodyLines = None):
+        """
+
+        Returns an array of lines... (TODO: Decide if we want this.)        
+        """
+        # TODO: Check request is in range...
+        # TODO: Don't retrieve all of the message, just what's needed.
+        msgContent = self.unreadMsgs[msgNumber].source
+
+        msgContent = _massage(msgContent)# TODO: Remove this...
+
+        msgLines = msgContent.split("\r\n")
+
+        if bodyLines is not None:
+            blankIndex = msgLines.index("") # Blank line between header & body.
+            msgLines = msgLines[:blankIndex + 1 + bodyLines]
+
+        return msgLines            
+
+                
+class POPChannel(asynchat.async_chat):
+
+    def __init__(self, server, conn, addr):
+        asynchat.async_chat.__init__(self, conn)
+        self.__server = server
+        self.__conn = conn
+        self.__addr = addr
+        self.__line = []
+        self.__fqdn = socket.getfqdn()
+        self.__peer = conn.getpeername()
+        print >> DEBUGSTREAM, 'Peer:', repr(self.__peer)
+        self.push('+OK %s %s' % (self.__fqdn, __version__))
+        self.set_terminator('\r\n')
+
+        self._activeDataChannel = None
+        
+
+    # Overrides base class for convenience
+    def push(self, msg):
+        asynchat.async_chat.push(self, msg + '\r\n')
+
+    # Implementation of base class abstract method
+    def collect_incoming_data(self, data):
+        self.__line.append(data)
+
+    # Implementation of base class abstract method
+    def found_terminator(self):
+        line = EMPTYSTRING.join(self.__line)
+        print >> DEBUGSTREAM, 'Data:', repr(line)
+        self.__line = []
+        if not line:
+            self.push('500 Error: bad syntax')
+            return
+        method = None
+        i = line.find(' ')
+        if i < 0:
+            command = line.upper()
+            arg = None
+        else:
+            command = line[:i].upper()
+            arg = line[i+1:].strip()
+        method = getattr(self, 'pop_' + command, None)
+        if not method:
+            self.push('-ERR Error : command "%s" not implemented' % command)
+            return
+        method(arg)
+        return
+
+
+    def pop_USER(self, arg):
+        if not arg:
+            self.push('-ERR: Syntax: USER username')
+        else:
+            global my_user
+            my_user = arg
+            self.push('+OK Password required')
+
+
+    def pop_PASS(self, arg = ''):
+        """
+        """
+        ga = libgmail.GmailAccount(my_user, arg)
+
+        try:
+            ga.login()
+        except libgmail.GmailLoginFailure:
+            self.push('-ERR Login failed. (Wrong username/password?)')
+        else:
+            # For the moment this is our form of "locking the maildrop".
+            global snapshot
+            snapshot = GmailAccountSnapshot(ga)
+            
+            self.push('+OK User logged in')
+
+
+    def pop_STAT(self, arg):
+        """
+        """
+        # We define "Mail Drop" as being unread messages.
+        # TODO: Handle presenting all messages using read=deleted approach
+        #       or would it be better to be read=archived?
+        
+        # We just use a dummy mail drop size here at present, hope it causes
+        # no problems...
+        # TODO: Determine actual drop size... (i.e. always download msgs)
+        mailDropSize = 1
+        
+        self.push('+OK %d %d' % (len(snapshot.unreadMsgs), mailDropSize))
+
+
+    def pop_LIST(self, arg):
+        """
+        """
+        DUMMY_MSG_SIZE = 1 # TODO: Determine actual message size.
+        msgCount = len(snapshot.unreadMsgs)
+        self.push('+OK')
+        if not arg:
+            # TODO: Change all of this to operate on an account snapshot?
+            for msgIdx in range(1, msgCount + 1):
+                self.push('%d %d' % (msgIdx, DUMMY_MSG_SIZE))
+        else:
+            try:
+                arg = int(arg)
+            except ValueError:
+                arg = -1
+            if 0 < arg <= msgCount:
+                self.push('%d %d' % (arg, DUMMY_MSG_SIZE))
+            else:
+                self.push("-ERR no such message, only %d messages in maildrop"
+                          % msgCount)
+        self.push(".")
+
+    def pop_RETR(self, arg):
+        """
+        """
+        if not arg:
+            self.push('-ERR: Syntax: RETR msg')
+        else:
+            # TODO: Check request is in range...
+            msgNumber = int(arg) - 1 # Argument is 1 based, sequence is 0 based
+            
+            self.push('+OK')
+
+            for msgLine in byteStuff(snapshot.retrieveMessage(msgNumber)):
+                self.push(msgLine)
+
+            self.push('.') # TODO: Make constant...
+
+
+    def pop_TOP(self, arg):
+        """
+        """
+        if not arg:
+            self.push('-ERR: Syntax: TOP msg n')
+        else:
+            msgNumber, bodyLines = arg.split(" ")
+            # TODO: Check request is in range...
+            msgNumber = int(msgNumber) - 1 # Argument is 1 based, sequence is 0 based
+            bodyLines = int(bodyLines)
+            
+            self.push('+OK')
+
+            for msgLine in byteStuff(snapshot.retrieveMessage(msgNumber, bodyLines)):
+                self.push(msgLine)
+
+            self.push('.') # TODO: Make constant...
+
+    
+    def pop_QUIT(self, arg):
+        # args is ignored
+        self.push('+OK Goodbye')
+        self.close_when_done()
+
+
+def byteStuff(lines):
+    """
+    """
+    for line in lines:
+        if line.startswith("."):
+            line = "." + line
+        yield line
+
+
+def _massage(msgContent):
+    """
+    """
+    # TODO: Put this message massaging in `GmailMessage.source`
+    #       and standardise how message ends? (e.g. '\r\n' not '\n')
+    msgContent = msgContent.lstrip()
+    msgContent += "\r\n"
+    return msgContent
+
+
+class POP3Proxy(asyncore.dispatcher):
+    def __init__(self, localaddr):
+        self._localaddr = localaddr
+        asyncore.dispatcher.__init__(self)
+        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
+        # try to re-use a server port if possible
+        self.set_reuse_addr()
+        self.bind(localaddr)
+        self.listen(5)
+        print >> DEBUGSTREAM, \
+              '%s started at %s\n\tLocal addr: %s\n' % (
+            self.__class__.__name__, time.ctime(time.time()),
+            localaddr)
+
+    def handle_accept(self):
+        conn, addr = self.accept()
+        print >> DEBUGSTREAM, 'Incoming connection from %s' % repr(addr)
+        channel = POPChannel(self, conn, addr)
+
+        
+
+if __name__ == '__main__':
+    DEBUGSTREAM = sys.stderr
+    
+    proxy = POP3Proxy(('127.0.0.1', 8110))
+
+    try:
+        asyncore.loop()
+    except KeyboardInterrupt:
+        pass
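
The `byteStuff` generator above implements POP3 dot-stuffing (RFC 1939): any line beginning with `.` gets an extra `.` prepended so it cannot terminate the multi-line response early. A round-trip sketch, with a hypothetical unstuffing counterpart the client side would apply:

```python
def byte_stuff(lines):
    """RFC 1939 byte-stuffing: prefix '.'-initial lines with another '.'."""
    for line in lines:
        if line.startswith("."):
            line = "." + line
        yield line

def byte_unstuff(lines):
    """Client-side inverse: drop one leading '.' from doubled-dot lines."""
    for line in lines:
        if line.startswith(".."):
            line = line[1:]
        yield line
```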

Added: trunk/bigboard/libgmail/demos/gmailsmtp.py
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/demos/gmailsmtp.py	Mon May 12 18:42:13 2008
@@ -0,0 +1,127 @@
+#!/usr/bin/env python
+#
+# gmailsmtp.py -- Demo to allow smtp delivery via Gmail
+#
+# $Revision: 1.4 $ ($Date: 2005/08/16 06:43:47 $)
+#
+# Author: follower myrealbox com
+#
+# License: GPL 2.0
+#
+
+import os
+import sys
+import email
+import base64
+import asyncore
+import logging
+
+import smtpd
+
+# Allow us to run using installed `libgmail` or the one in parent directory.
+try:
+    import libgmail
+    logging.warn("Note: Using currently installed `libgmail` version.")
+except ImportError:
+    # Urghhh...
+    sys.path.insert(1,
+                    os.path.realpath(os.path.join(os.path.dirname(__file__),
+                                                  os.path.pardir)))
+
+    import libgmail
+
+
+
+ga = None
+
+class GmailSmtpProxy(smtpd.SMTPServer):
+    """
+    """
+
+    def process_message(self, peer, mailfrom, rcpttos, data):
+        """
+        """
+        result = None
+
+        body = ""
+        attachments = []
+        
+        msg = email.message_from_string(data)
+
+        #import pdb; pdb.set_trace()
+
+        # Handle attachments, if present.
+        if msg.is_multipart():
+            for part in msg.get_payload():
+                if part.get_content_type() == "text/plain":
+                    # TODO: Do we need to handle "message/rfc822" too?
+                    body = part.get_payload()
+                else:
+                    attachments.append(part)
+        else:
+            body = msg.get_payload()
+
+        gmsg = libgmail.GmailComposedMessage(to = msg["To"],
+                                             subject = msg["Subject"],
+                                             body = body,
+                                             files = attachments)
+
+        # Don't drop connection until we know we delivered...
+        if not ga.sendMessage(gmsg):
+            result = "Could not deliver."
+
+        return result
+
+
+    def handle_accept(self):
+        conn, addr = self.accept()
+        print >> smtpd.DEBUGSTREAM, 'Incoming connection from %s' % repr(addr)
+        channel = ESMTPChannel(self, conn, addr)
+
+
+class ESMTPChannel(smtpd.SMTPChannel):
+    """
+    """
+    
+    def smtp_EHLO(self, arg):
+        if not arg:
+            self.push('501 Syntax: EHLO hostname')
+            return
+##        if self.__greeting:
+        if self._SMTPChannel__greeting:
+            self.push('503 Duplicate HELO/EHLO')
+        else:
+##             self.__greeting = arg
+##             self.push('250 %s' % self.__fqdn)
+            self._SMTPChannel__greeting = arg
+            self.push('250-%s' % self._SMTPChannel__fqdn)
+            self.push('250 AUTH PLAIN')
+
+
+    def smtp_AUTH(self, arg):
+        """
+        """
+        kind, data = arg.split(" ")
+        # TODO: Ensure kind == "PLAIN"
+        # TODO: Support "LOGIN" (required by Outlook?) <http://www.technoids.org/saslmech.html>
+
+        data = base64.decodestring(data)[1:]
+        user, pw = data.split("\x00")
+
+        global ga
+        ga = libgmail.GmailAccount(user, pw)
+        
+        try:
+            ga.login()
+        except libgmail.GmailLoginFailure:
+            self.push("535 Authorization failed")
+        else:
+            self.push('235 Ok')
+
+
+if __name__ == "__main__":
+
+    #smtpd.DEBUGSTREAM = sys.stderr
+
+    server = GmailSmtpProxy(("localhost", 8025), None)
+
+    asyncore.loop()
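
`smtp_AUTH` above decodes an AUTH PLAIN token: base64 over `authzid NUL authcid NUL password` (RFC 4616), with the leading authorization identity dropped before splitting out user and password. A round-trip sketch with hypothetical helper names (the proxy itself only performs the decode step, and assumes an empty authzid):

```python
import base64

def make_auth_plain(user, password, authzid=""):
    """Encode credentials as RFC 4616 PLAIN: authzid NUL authcid NUL passwd."""
    raw = "%s\x00%s\x00%s" % (authzid, user, password)
    return base64.b64encode(raw.encode()).decode()

def parse_auth_plain(token):
    """Inverse: strip the authzid and return (user, password)."""
    decoded = base64.b64decode(token).decode()
    _authzid, user, password = decoded.split("\x00")
    return user, password
```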

Added: trunk/bigboard/libgmail/demos/readmail.py
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/demos/readmail.py	Mon May 12 18:42:13 2008
@@ -0,0 +1,60 @@
+#!/usr/bin/env python
+'''
+readmail.py -- Demo to read all messages in gmail account for folders
+License: GPL 2.0
+'''
+
+import sys
+from getpass import getpass
+import libgmail
+
+if __name__ == "__main__":
+    try:
+        name = sys.argv[1]
+    except IndexError:
+        name = raw_input("Gmail account name: ")
+        
+    pw = getpass("Password: ")
+
+    ga = libgmail.GmailAccount(name, pw)
+
+    print "\nPlease wait, logging in..."
+
+    try:
+        ga.login()
+    except libgmail.GmailLoginFailure,e:
+        print "\nLogin failed. (%s)" % e.message
+        raise SystemExit
+    else:
+        print "Login successful.\n"
+
+    # Map of lgconstants search names to the folder names accepted below.
+    FOLDER_list = {'U_INBOX_SEARCH' : 'inbox',
+                   'U_STARRED_SEARCH' : 'starred',
+                   'U_ALL_SEARCH' : 'all',
+                   'U_DRAFTS_SEARCH' : 'drafts',
+                   'U_SENT_SEARCH' : 'sent',
+                   'U_SPAM_SEARCH' : 'spam',
+                   }
+
+    folderName = raw_input('Choose a folder (inbox, starred, all, drafts, sent, spam): ')
+    folder = ga.getMessagesByFolder(folderName)
+
+    for thread in folder:
+        print thread.id, len(thread), thread.subject
+        choice = raw_input('Read this message? [y/n]: ')
+        try:
+            if choice == 'y':
+                for msg in thread:
+                    print "  ", msg.id, msg.number, msg.subject
+                    # TODO: print compact header
+                    # header = ['From', 'Date', 'Subject']
+                    # for k in header:
+                    #    print k,':',msg.source[k]
+                    print msg.source
+            elif choice == 'n':
+                pass
+            else:
+                print '\nUnrecognized choice, moving to the next message...\n'
+        except KeyboardInterrupt:
+            break
+            
+    print "\n\nDone."

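readmail.py passes whatever the user typed straight to `getMessagesByFolder`. A small, hypothetical validator (folder names taken from the prompt above; `choose_folder` is not a libgmail function) that could guard that call:

```python
# Folder names accepted by the readmail.py prompt above.
VALID_FOLDERS = ('inbox', 'starred', 'all', 'drafts', 'sent', 'spam')

def choose_folder(name):
    # Normalize and validate user input before handing it to
    # getMessagesByFolder; raises ValueError for unknown folders.
    name = name.strip().lower()
    if name not in VALID_FOLDERS:
        raise ValueError('unknown folder: %r' % (name,))
    return name
```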
Added: trunk/bigboard/libgmail/demos/sendmsg.py
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/demos/sendmsg.py	Mon May 12 18:42:13 2008
@@ -0,0 +1,62 @@
+#!/usr/bin/env python
+#
+# sendmsg.py -- Demo to send a message via Gmail using libgmail
+#
+# $Revision: 1.4 $ ($Date: 2005/09/18 18:41:48 $)
+#
+# Author: follower myrealbox com
+#
+# License: GPL 2.0
+#
+import os
+import sys
+import logging
+
+# Allow us to run using installed `libgmail` or the one in parent directory.
+try:
+    import libgmail
+    ## Wouldn't this be the preferred way?
+    ## We shouldn't raise a warning about a normal import
+    ##logging.warn("Note: Using currently installed `libgmail` version.")
+except ImportError:
+    # Urghhh...
+    sys.path.insert(1,
+                    os.path.realpath(os.path.join(os.path.dirname(__file__),
+                                                  os.path.pardir)))
+
+    import libgmail
+
+    
+if __name__ == "__main__":
+    import sys
+    from getpass import getpass
+
+    try:
+        name = sys.argv[1]
+        to = sys.argv[2]
+        subject = sys.argv[3]
+        msg = sys.argv[4]
+    except IndexError:
+        print "Usage: %s <account> <to address> <subject> <body>" % sys.argv[0]
+        raise SystemExit
+        
+    pw = getpass("Password: ")
+
+    ga = libgmail.GmailAccount(name, pw)
+
+    print "\nPlease wait, logging in..."
+
+    try:
+        ga.login()
+    except libgmail.GmailLoginFailure:
+        print "\nLogin failed. (Wrong username/password?)"
+    else:
+        print "Log in successful.\n"
+        gmsg = libgmail.GmailComposedMessage(to, subject, msg)
+
+        if ga.sendMessage(gmsg):
+            print "Message `%s` sent successfully." % subject
+        else:
+            print "Could not send message."
+
+        print "Done."

Added: trunk/bigboard/libgmail/demos/test_fwd_attach.py
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/demos/test_fwd_attach.py	Mon May 12 18:42:13 2008
@@ -0,0 +1,58 @@
+#!/usr/bin/env python
+
+#
+# Usage: test_fwd_attach.py <account> <password> <recipient> <subject>
+#
+
+# This example forwards the first attachment from the search for "<subject>"
+# to the recipient.
+#
+
+import os
+import sys
+import logging
+
+# Allow us to run using installed `libgmail` or the one in parent directory.
+try:
+    import libgmail
+    logging.warn("Note: Using currently installed `libgmail` version.")
+except ImportError:
+    # Urghhh...
+    sys.path.insert(1,
+                    os.path.realpath(os.path.join(os.path.dirname(__file__),
+                                                  os.path.pardir)))
+
+    import libgmail
+
+account, pw, recipient, searchSubject = sys.argv[1:]
+
+ga = libgmail.GmailAccount(account, pw)
+ga.login()
+
+sr = ga.getMessagesByQuery("subject:%s" % searchSubject)
+
+attachmentId = None
+
+for thread in sr:
+    for msg in thread:
+        if msg.attachments:
+            attachmentId = msg.attachments[0]._fullId
+            break # Just use the first result.
+
+if not attachmentId:
+    print "No attachment found."
+    raise SystemExit
+
+
+cm = libgmail.GmailComposedMessage(to=recipient,
+                                   subject="File attachment from: %s" % msg.subject,
+                                   body="body")
+
+# Note: At present we can only have one forwarded attachment because
+#       we're using a dictionary for the parameters, and all attachments
+#       have the field name "attach".
+# TODO: Allow multiple forwarded attachments. (Probably by adding to the
+#       `_paramsToMime` function, although that's kinda hacky.)
+if ga.sendMessage(cm, _extraParams = {'attach': attachmentId}):
+    print "Succeeded."
+else:
+    print "Failed."

Added: trunk/bigboard/libgmail/demos/test_notifier.py
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/demos/test_notifier.py	Mon May 12 18:42:13 2008
@@ -0,0 +1,312 @@
+#!/usr/bin/env python
+
+#
+# Rough first draft code to use "official" Gmail Notifier protocol
+#
+# Author: follower myrealbox com
+#
+# License: GPL 2.0
+# 
+# Obviously this all needs to be turned into something state-machiney
+# eventually.
+#
+# ObBlah: This program is for educational or interoperability purposes.
+# 
+
+import os
+import sys
+import logging
+
+# Allow us to run using installed `libgmail` or the one in parent directory.
+try:
+    import libgmail
+    logging.warn("Note: Using currently installed `libgmail` version.")
+except ImportError:
+    # Urghhh...
+    sys.path.insert(1,
+                    os.path.realpath(os.path.join(os.path.dirname(__file__),
+                                                  os.path.pardir)))
+
+    import libgmail
+
+r = '\n\x82\x02\x10\x93\xdd\xf8\xab\x87\x99\xa7\xf5\x0f\x18\xc9\x8c\xb2\xce\xea\x1f\x82\x01\x04^all\x82\x01\x02^f\x82\x01\x02^i\x82\x01\x02^u\x92\x01\x1e\n\x18\n\x12xxxxxxxx gmail com\x12\x02me\x10\x01\x18\x01\x98\x01\x02\xa2\x01\x97\x01[Test] This is a really really really really really really really really blah blah blah blah long subject line blah blah 123456789012345678901234567890\xaa\x01\x16So, did you see it all\xb8\x01\x01\n\xfa\x01\x10\xd0\x8a\xb8\x97\xf5\xcd\xa4\xf5\x0f\x18\xbd\xea\x9b\xc9\xea\x1f\x82\x01\x04^all\x82\x01\x02^i\x82\x01\x02^u\x92\x01,\n&\n\x18gmail-noreply google com\x12\nGmail Team\x10\x01\x18\x01\x98\x01\x02\xa2\x014Xxxxxx Xxxxxxx has accepted your invitation to Gmail\xaa\x01iXxxxxx Xxxxxxx has accepted your invitation to Gmail and has chosen the brand new address xxxxxx &hellip;\xb8\x01\x01\x88\x01\x02\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00
 \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
 x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
+
+from pprint import pprint
+#pprint(r.split("\x01"))
+
+from cStringIO import StringIO
+
+
+def _getCode(s):
+    """
+    """
+    code = ord(s.read(1))
+    print "code:", hex(code)
+
+    assert(s.read(1) == "\x01")
+
+    return code
+
+
+def _getNextBytes(s, nextReadCount = 0):
+    """
+    """
+    if not nextReadCount:
+        nextReadCount = ord(s.read(1))
+        
+    bytes = s.read(nextReadCount)
+    print "bytes:", repr(bytes),
+
+    return bytes
+    
+    
+
+def parseThreadData(s, obj):
+    """
+    """
+
+    # Data header
+    # Unknown initial bytes--maybe message Id?
+    ##nextReadCount = 19 # This does not seem to be consistent...
+    # TODO: Find out why.
+    #
+    # Example:
+    #
+    ## 0x82 02 10 93 dd f8 ab 87 99 a7 f5 0f 18 c9 8c b2 ce ea 1f  code: 0x82
+    ## 0xfa 01 10 d0 8a b8 97 f5 cd a4 f5 0f 18 bd ea 9b c9 ea 1f  code: 0x82
+    ## 0x7f 10 b8 d5 a2 bf b6 b7 a4 f5 0f 18 f8 ed ee c8 ea 1f  code: 0x82
+    ## 0x83 01 10 de 8d d5 8f d6 b3 a4 f5 0f 18 80 ad e7 c8 ea 1f  code: 0x82
+    ## 0x89 01 10 e0 be 9e cb f3 a8 a4 f5 0f 18 98 e7 d1 c8 ea 1f  code: 0x82
+    ## 0x96 01 10 95 d4 e3 d8 d6 a7 a4 f5 0f 18 d8 ae cf c8 ea 1f  code: 0x82
+    ## 0x9e 02 10 94 ba be c9 9b a7 a2 f5 0f 18 a2 b5 ce c4 ea 1f  code: 0x82
+    ## 0xc5 01 10 f0 94 d6 dd fb 94 a2 f5 0f 18 87 f6 a9 c4 ea 1f  code: 0x82
+    ## 0xf5 01 10 e5 f3 b8 97 cd ab a0 f5 0f 18 9b 99 d7 c0 ea 1f  code: 0x82
+    ## 0xf1 01 10 e2 b2 c9 ed f3 96 9b f5 0f 18 87 e7 ad b6 ea 1f  code: 0x82
+    ## 0xaa 02 10 8a 8e ab f9 e7 fd 93 f5 0f 18 a5 ce fb a7 ea 1f  code: 0x82
+    ## 0x81 02 10 a8 aa f4 f2 80 a7 df f4 0f 18 d0 82 ce be e9 1f  code: 0x82
+    ## 0x92 02 10 a0 ae 98 c6 a5 d3 aa f4 0f 18 b8 cb a6 d5 e8 1f  code: 0x82
+    ## 0xf8 01 10 8e d5 bf 98 f9 a0 a1 f4 0f 18 81 f2 c1 c2 e8 1f  code: 0x82
+    ## 0x9c 02 10 be f1 c5 e6 c9 e5 9a f4 0f 18 88 92 cb b5 e8 1f  code: 0x82
+    ## 0x86 02 10 e6 bc 99 ca a0 e5 9a f4 0f 18 f8 c3 ca b5 e8 1f  code: 0x82
+    ## 0x87 01 10 91 ce 8d e8 90 96 9a f4 0f 18 90 a2 ac b4 e8 1f  code: 0x82
+    ## 0xd3 01 10 be f7 b9 8b 81 b2 99 f4 0f 18 a8 81 e4 b2 e8 1f  code: 0x82
+    ## 0xf3 01 10 e6 df b1 c3 ae 9b 97 f4 0f 18 d7 dc b6 ae e8 1f  code: 0x82
+    ## 0xf1 01 10 88 95 e0 8f d9 d0 90 f4 0f 18 c4 b0 a1 a1 e8 1f  code: 0x82
+    ## 0xef 01 10 cc e0 f9 92 b2 da 8f f4 0f 18 f1 e6 b4 9f e8 1f  code: 0x82
+    ## 0xf0 01 10 ac b5 b6 fe af ec 86 f4 0f 18 d3 dd d8 8d e8 1f  code: 0x82
+    ## 0x7b 10 fc 82 eb e0 85 a6 86 f4 0f 18 b7 8a cc 8c e8 1f  code: 0x82
+    ## 0xa5 01 10 e3 f0 87 db f6 91 86 f4 0f 18 a4 ee a3 8c e8 1f  code: 0x82
+    ## 0x73 10 dd a8 e9 b5 ee ea 85 f4 0f 18 ca dd d5 8b e8 1f  code: 0x82
+
+    byteString = "0x"
+    while True:
+        # Skip unknown bytes (Note: This method probably isn't 100% reliable.)
+        # TODO: Work out what they are--there are some similarities.
+        #       Guess is date/time/id?
+        byte = s.read(1)
+        byteString += "%02x " % ord(byte)
+        if byte == "\x1f":
+            print byteString,
+            code = _getCode(s)
+            break
+
+    while code != 0x92:
+
+        bytes = _getNextBytes(s) ##, nextReadCount)
+        code = _getCode(s)
+
+        ##if code == 0x92:
+        ##    break
+
+    fromCount = 0
+    while True:
+        fromCount +=1
+        # Unknown, time/date?
+        print "Hard-coded read."
+        bytes = _getNextBytes(s, 4)
+        print
+            
+        # From
+        bytes = _getNextBytes(s)
+        print
+
+        assert(s.read(1) == "\x12")
+
+        bytes = _getNextBytes(s)
+        print
+
+        # --- This isn't right/or is messy... ----
+        # 0x10 == From?
+        # 0x18 == To?
+        code = _getCode(s)
+        assert((code == 0x10) or (code == 0x18)
+               or (code == 0x98) or (code == 0x92))
+
+        if code == 0x98:
+            break
+
+        if code == 0x92:
+            continue
+
+        byte = _getCode(s)
+        if byte == 0x18:
+            byte = _getCode(s)
+
+        if byte == 0x98:
+            break
+        else:
+            assert(byte == 0x92)
+        # ----------------
+
+    print "fromCount:", fromCount
+        
+        
+    
+        #elif code == 0x10:
+        #    break
+
+        ##nextReadCount = ord(s.read(1))
+
+        ##print "next:", nextReadCount
+
+
+    # Unknown
+    for nextReadCount in [2]:
+        bytes = _getNextBytes(s, nextReadCount)
+        print
+        
+        assert(s.read(1) == "\x01")
+
+    # Subject
+    # Extra long (> n, where n = ???) subjects have form:
+    #    length, 0x01, subject
+    #
+    # Shorter length subject have form:
+    #    length, subject
+    #
+    # TODO: Determine what happens when length > 255?
+    nextReadCount = ord(s.read(1))
+
+    if s.read(1) != "\x01":
+        s.seek(-1, 1)
+
+    bytes = _getNextBytes(s, nextReadCount)
+    code = _getCode(s)
+
+    obj.subject = bytes
+
+    # Message snippet
+    bytes = _getNextBytes(s)
+    code = _getCode(s)
+
+    obj.snippet = bytes
+
+    # End of message data
+    #assert(s.read(1) == "\x01")
+    threadMsgCount = ord(s.read(1))
+    print "threadMsgCount", threadMsgCount
+    # TODO: Find some way to make sure this is true...
+
+    
+
+
+class GmailNotifierResponse:
+    """
+    """
+
+    def __init__(self, responseData):
+        """
+        """
+
+
+# TODO: Merge this with GmailThread?
+class GmailNotifierThread:
+    """
+    """
+
+    def __init__(self, threadData):
+        """
+        """
+        # TODO: Move this to method of this object?
+        parseThreadData(threadData, self)
+    
+
+if __name__ == "__main__":
+    # Change the `0` to a `1` to retrieve a live response.
+    if 0:
+    
+        import sys
+        from getpass import getpass
+
+        try:
+            name = sys.argv[1]
+        except IndexError:
+            name = raw_input("Gmail account name: ")
+
+        pw = getpass("Password: ")
+
+        ga = libgmail.GmailAccount(name, pw)
+
+        print "\nPlease wait, logging in..."
+
+        #import pdb; pdb.set_trace()
+        ga.login()
+
+        print "Log in successful.\n"
+
+        r = ga._retrievePage("https://gmail.google.com/gmail?ui=pb&q=label:^i%20label:^u")
+        #r = ga._retrievePage("https://gmail.google.com/gmail?ui=pb&q=label:^all")
+
+    #print repr(r)
+    pprint(r.split("\x01"))
+
+    s = StringIO(r)
+
+    numMsgs = 0
+
+    threads = []
+
+    while True:
+        code = ord(s.read(1))
+
+        if code == 0x0a:
+            numMsgs += 1
+            threads.append(GmailNotifierThread(s))
+        elif code == 0x88:
+            # trailer
+            assert(s.read(1) == "\x01")
+            # If there are no messages there's no trailing count?
+            msgCount = ord(s.read(1)) # TODO: What about count > 255?
+            print "Messages:", msgCount
+            print "Messages found:", numMsgs
+
+            # Maximum of 30 Messages?
+            try:
+                assert(msgCount == numMsgs)
+                print "All messages retrieved."
+            except AssertionError:
+                print "Not all messages retrieved."
+                assert(msgCount > 30)
+                assert(numMsgs == 30)
+
+            padding = s.read() # Padding ensures data length is power of 2.
+            print "padding bytes:", len(padding)
+            try:
+                assert(len([byte for byte in padding if byte != "\x00"]) == 0)
+            except AssertionError:
+                # What is the extra data value? ("more", "not all shown"?)
+                print "AssertionError: Not all padding blank."
+                print repr(padding[:10])
+
+            break
+        else:
+            raise Exception("Unknown code")
+
+    print "Total length:", len(r)
+
+    print
+
+    for th in threads:
+        print th.subject
+        print th.snippet
+        print
+        

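The notifier parser above repeatedly reads a one-byte length followed by that many payload bytes (the default path of `_getNextBytes`). The core of that pattern as a standalone sketch over an in-memory stream (`read_length_prefixed` is an illustrative helper, not part of the demo):

```python
from io import BytesIO

def read_length_prefixed(stream):
    # Read one length byte, then that many payload bytes; returns None
    # at end of stream. Mirrors the default path of _getNextBytes above.
    length_byte = stream.read(1)
    if not length_byte:
        return None
    return stream.read(ord(length_byte))

# Example stream: two records, b'abc' (length 3) then b'xy' (length 2).
stream = BytesIO(b'\x03abc\x02xy')
```

As the comments in `parseThreadData` note, this simple scheme breaks down for fields longer than 255 bytes, which is why the subject parsing needs its extra `0x01` handling.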
Added: trunk/bigboard/libgmail/demos/unreadmsgcount.py
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/demos/unreadmsgcount.py	Mon May 12 18:42:13 2008
@@ -0,0 +1,66 @@
+#!/usr/bin/env python
+#
+# unreadmsgcount.py -- Demo to return unread message count with saved state
+#
+# $Revision: 1.2 $ ($Date: 2005/08/16 06:43:47 $)
+#
+# Author: follower myrealbox com
+#
+# License: GPL 2.0
+#
+#
+# This demo intends to show how account state can be saved between script
+# runs.
+#
+import os
+import sys
+import logging
+
+# Allow us to run using installed `libgmail` or the one in parent directory.
+try:
+    import libgmail
+    logging.warn("Note: Using currently installed `libgmail` version.")
+except ImportError:
+    # Urghhh...
+    sys.path.insert(1,
+                    os.path.realpath(os.path.join(os.path.dirname(__file__),
+                                                  os.path.pardir)))
+
+    import libgmail
+
+    
+if __name__ == "__main__":
+    import sys
+    from getpass import getpass
+
+    try:
+        filename = sys.argv[1]
+    except IndexError:
+        print "Usage: %s <state filename>" % sys.argv[0]
+        raise SystemExit
+
+    if not os.path.isfile(filename):
+        name = raw_input("Gmail account name: ")
+        pw = getpass("Password: ")
+        ga = libgmail.GmailAccount(name, pw)
+
+        print "\nPlease wait, logging in..."
+
+        try:
+            ga.login()
+        except libgmail.GmailLoginFailure:
+            print "\nLogin failed. (Wrong username/password?)"
+            raise SystemExit
+
+        print "Log in successful.\n"
+    else:
+        print "\nDon't wait, not logging in... :-)"
+        ga = libgmail.GmailAccount(
+            state = libgmail.GmailSessionState(filename = filename))
+
+    print "Unread messages: %s" % ga.getUnreadMsgCount()
+
+    print "Saving state..."
+    state = libgmail.GmailSessionState(account = ga).save(filename)
+
+    print "Done."

Added: trunk/bigboard/libgmail/gmail_transport.py
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/gmail_transport.py	Mon May 12 18:42:13 2008
@@ -0,0 +1,144 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+# ----------------------------------------------------------------------------------
+# Copyleft (K) by Jose Rodriguez. This source is free (GPL)
+# Partially based on John Nielsen ASPN recipe (http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/301740)
+# Partially based on Alessandro Budai recipe (http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/456195)
+# ----------------------------------------------------------------------------------
+
+
+# urllib2 opener to connect through a proxy using the CONNECT method (useful for SSL)
+# tested with python 2.4
+
+import urllib2
+import urllib
+import httplib
+import socket
+import base64
+
+
+def split_proxy_URL(proxy):
+	if proxy is None:
+	    return None, None, None
+
+	try:
+	    if proxy[:7] != 'http://':  # Ensures proxy string begins with 'http://'
+	        proxy = 'http://' + proxy
+	except:
+	    pass
+
+	proxy_username = proxy_password = None
+
+	urltype, r_type = urllib.splittype(proxy)
+	proxy, XXX = urllib.splithost(r_type)
+	if '@' in proxy:
+	    proxy_username, proxy = proxy.split('@', 1)
+	    if ':' in proxy_username:
+	        proxy_username, proxy_password = proxy_username.split(':', 1)
+
+	return proxy, proxy_username, proxy_password
+
+
+
+class ProxyHTTPConnection(httplib.HTTPConnection):
+
+	_ports = {'http' : 80, 'https' : 443}
+
+	def request(self, method, url, body=None, headers={}):
+		#request is called before connect, so can interpret url and get
+		#real host/port to be used to make CONNECT request to proxy
+		proto, rest = urllib.splittype(url)
+		if proto is None:
+			raise ValueError, "unknown URL type: %s" % url
+
+		host, rest = urllib.splithost(rest) # get host
+		host, port = urllib.splitport(host) #try to get port
+
+		#if port is not defined try to get from proto
+		if port is None:
+			try:
+				port = self._ports[proto]
+			except KeyError:
+				raise ValueError, "unknown protocol for: %s" % url
+
+		self._real_host = host
+		self._real_port = port
+		httplib.HTTPConnection.request(self, method, url, body, headers)
+		
+
+	def connect(self):
+		httplib.HTTPConnection.connect(self)
+
+		self.send("CONNECT %s:%d HTTP/1.0\r\n" % (self._real_host, self._real_port))
+		if self.proxy_user is not None and self.proxy_passwd is not None:
+			cred = base64.encodestring("%s:%s" % (urllib.unquote(self.proxy_user), urllib.unquote(self.proxy_passwd))).strip()
+			self.send("Proxy-authorization: Basic %s\r\n" % cred)
+
+		self.send("User-Agent: Mozilla/5.0 (Compatible; libgmail-python)\r\n\r\n")
+		response = self.response_class(self.sock, strict=self.strict, method=self._method)
+		(version, code, message) = response._read_status()
+		#probably here we can handle auth requests...
+		if code != 200:
+			#proxy returned an error: abort the connection and raise an exception
+			self.close()
+			raise socket.error, "Proxy connection failed: %d %s" % (code, message.strip())
+
+		#eat up header block from proxy....
+		while True:
+			line = response.fp.readline() # probably should not use fp directly
+			if line == '\r\n': break
+
+
+	@classmethod
+	def new_auth(cls, proxy_host, proxy_user = None, proxy_passwd = None):
+		cls.proxy_host = proxy_host
+		cls.proxy_user = proxy_user
+		cls.proxy_passwd = proxy_passwd
+
+		return cls
+
+
+
+class ProxyHTTPSConnection(ProxyHTTPConnection):
+	
+	default_port = 443
+
+	def __init__(self, host, port = None, key_file = None, cert_file = None, strict = None):
+		ProxyHTTPConnection.__init__(self, host, port)
+		self.key_file = key_file
+		self.cert_file = cert_file
+	
+	def connect(self):
+		ProxyHTTPConnection.connect(self)
+		#make the sock ssl-aware
+		ssl = socket.ssl(self.sock, self.key_file, self.cert_file)
+		self.sock = httplib.FakeSocket(self.sock, ssl)
+
+		
+
+class ConnectHTTPHandler(urllib2.HTTPHandler):
+   
+	def __init__(self, proxy=None, debuglevel=0):
+		self.proxy, self.proxy_user, self.proxy_passwd = split_proxy_URL(proxy)
+		urllib2.HTTPHandler.__init__(self, debuglevel)
+
+	def do_open(self, http_class, req):
+		if self.proxy is not None:
+			req.set_proxy(self.proxy, 'http')
+		return urllib2.HTTPHandler.do_open(self, ProxyHTTPConnection.new_auth(self.proxy, self.proxy_user, self.proxy_passwd), req)
+	
+
+
+class ConnectHTTPSHandler(urllib2.HTTPSHandler):
+
+	def __init__(self, proxy=None, debuglevel=0):
+		self.proxy, self.proxy_user, self.proxy_passwd = split_proxy_URL(proxy)
+		urllib2.HTTPSHandler.__init__(self, debuglevel)
+
+	def do_open(self, http_class, req):
+		if self.proxy is not None:
+			req.set_proxy(self.proxy, 'https')
+		return urllib2.HTTPSHandler.do_open(self, ProxyHTTPSConnection.new_auth(self.proxy, self.proxy_user, self.proxy_passwd), req)
+
+

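`split_proxy_URL` above leans on urllib's `splittype`/`splithost` helpers. The same `user:password@host:port` splitting can be sketched with plain string operations (a simplified stand-in for illustration, not the module's implementation):

```python
def split_proxy_url(proxy):
    # Returns (host_port, user, password); user and password are None
    # when the proxy string carries no credentials.
    if proxy is None:
        return None, None, None
    if not proxy.startswith('http://'):
        proxy = 'http://' + proxy
    hostpart = proxy[len('http://'):].split('/', 1)[0]
    user = password = None
    if '@' in hostpart:
        creds, hostpart = hostpart.split('@', 1)
        if ':' in creds:
            user, password = creds.split(':', 1)
        else:
            user = creds
    return hostpart, user, password
```

Unlike the original, this stand-in returns the bare username when no `:password` is present instead of leaving both fields None.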
Added: trunk/bigboard/libgmail/lgconstants.py
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/lgconstants.py	Mon May 12 18:42:13 2008
@@ -0,0 +1,231 @@
+#
+# Generated file -- DO NOT EDIT
+#
+# Note: This file is now edited! 2005-04-25
+#
+# constants.py -- Useful constants extracted from Gmail Javascript code
+#
+# Source version: 44f09303f2d4f76f
+#
+# Generated: 2004-08-10 13:08 UTC
+#
+
+
+URL_LOGIN = "https://www.google.com/accounts/ServiceLoginBoxAuth"
+URL_GMAIL = "https://mail.google.com/mail/"
+
+
+# Constants with names not from the Gmail Javascript:
+U_SAVEDRAFT_VIEW = "sd"
+
+D_DRAFTINFO = "di"
+# NOTE: All other DI_* field offsets seem to match the MI_* field offsets
+DI_BODY = 19
+
+versionWarned = False # If the Javascript version is different have we
+                      # warned about it?
+
+
+js_version = '44f09303f2d4f76f'
+
+D_VERSION = "v"
+D_QUOTA = "qu"
+D_DEFAULTSEARCH_SUMMARY = "ds"
+D_THREADLIST_SUMMARY = "ts"
+D_THREADLIST_END = "te"
+D_THREAD = "t"
+D_CONV_SUMMARY = "cs"
+D_CONV_END = "ce"
+D_MSGINFO = "mi"
+D_MSGBODY = "mb"
+D_MSGATT = "ma"
+D_COMPOSE = "c"
+D_CONTACT = "co"
+D_CATEGORIES = "ct"
+D_CATEGORIES_COUNT_ALL = "cta"
+D_ACTION_RESULT = "ar"
+D_SENDMAIL_RESULT = "sr"
+D_PREFERENCES = "p"
+D_PREFERENCES_PANEL = "pp"
+D_FILTERS = "fi"
+D_GAIA_NAME = "gn"
+D_INVITE_STATUS = "i"
+D_END_PAGE = "e"
+D_LOADING = "l"
+D_LOADED_SUCCESS = "ld"
+D_LOADED_ERROR = "le"
+D_QUICKLOADED = "ql"
+QU_SPACEUSED = 0
+QU_QUOTA = 1
+QU_PERCENT = 2
+QU_COLOR = 3
+TS_START = 0
+TS_NUM = 1
+TS_TOTAL = 2
+TS_ESTIMATES = 3
+TS_TITLE = 4
+TS_TIMESTAMP = 5 + 1
+TS_TOTAL_MSGS = 6 + 1
+T_THREADID = 0
+T_UNREAD = 1
+T_STAR = 2
+T_DATE_HTML = 3
+T_AUTHORS_HTML = 4
+T_FLAGS = 5
+T_SUBJECT_HTML = 6
+T_SNIPPET_HTML = 7
+T_CATEGORIES = 8
+T_ATTACH_HTML = 9
+T_MATCHING_MSGID = 10
+T_EXTRA_SNIPPET = 11
+CS_THREADID = 0
+CS_SUBJECT = 1
+CS_TITLE_HTML = 2
+CS_SUMMARY_HTML = 3
+CS_CATEGORIES = 4
+CS_PREVNEXTTHREADIDS = 5
+CS_THREAD_UPDATED = 6
+CS_NUM_MSGS = 7
+CS_ADKEY = 8
+CS_MATCHING_MSGID = 9
+MI_FLAGS = 0
+MI_NUM = 1
+MI_MSGID = 2
+MI_STAR = 3
+MI_REFMSG = 4
+MI_AUTHORNAME = 5
+MI_AUTHORFIRSTNAME = 6 # ? -- Name supplied by rj
+MI_AUTHOREMAIL = 6 + 1
+MI_MINIHDRHTML = 7 + 1
+MI_DATEHTML = 8 + 1
+MI_TO = 9 + 1
+MI_CC = 10 + 1
+MI_BCC = 11 + 1
+MI_REPLYTO = 12 + 1
+MI_DATE = 13 + 1
+MI_SUBJECT = 14 + 1
+MI_SNIPPETHTML = 15 + 1
+MI_ATTACHINFO = 16 + 1
+MI_KNOWNAUTHOR = 17 + 1
+MI_PHISHWARNING = 18 + 1
+A_ID = 0
+A_FILENAME = 1
+A_MIMETYPE = 2
+A_FILESIZE = 3
+CT_NAME = 0
+CT_COUNT = 1
+AR_SUCCESS = 0
+AR_MSG = 1
+SM_COMPOSEID = 0
+SM_SUCCESS = 1
+SM_MSG = 2
+SM_NEWTHREADID = 3
+CMD_SEARCH = "SEARCH"
+ACTION_TOKEN_COOKIE = "GMAIL_AT"
+U_VIEW = "view"
+U_PAGE_VIEW = "page"
+U_THREADLIST_VIEW = "tl"
+U_CONVERSATION_VIEW = "cv"
+U_COMPOSE_VIEW = "cm"
+U_PRINT_VIEW = "pt"
+U_PREFERENCES_VIEW = "pr"
+U_JSREPORT_VIEW = "jr"
+U_UPDATE_VIEW = "up"
+U_SENDMAIL_VIEW = "sm"
+U_AD_VIEW = "ad"
+U_REPORT_BAD_RELATED_INFO_VIEW = "rbri"
+U_ADDRESS_VIEW = "address"
+U_ADDRESS_IMPORT_VIEW = "ai"
+U_SPELLCHECK_VIEW = "sc"
+U_INVITE_VIEW = "invite"
+U_ORIGINAL_MESSAGE_VIEW = "om"
+U_ATTACHMENT_VIEW = "att"
+U_DEBUG_ADS_RESPONSE_VIEW = "da"
+U_SEARCH = "search"
+U_INBOX_SEARCH = "inbox"
+U_STARRED_SEARCH = "starred"
+U_ALL_SEARCH = "all"
+U_DRAFTS_SEARCH = "drafts"
+U_SENT_SEARCH = "sent"
+U_SPAM_SEARCH = "spam"
+U_TRASH_SEARCH = "trash"
+U_QUERY_SEARCH = "query"
+U_ADVANCED_SEARCH = "adv"
+U_CREATEFILTER_SEARCH = "cf"
+U_CATEGORY_SEARCH = "cat"
+U_AS_FROM = "as_from"
+U_AS_TO = "as_to"
+U_AS_SUBJECT = "as_subj"
+U_AS_SUBSET = "as_subset"
+U_AS_HAS = "as_has"
+U_AS_HASNOT = "as_hasnot"
+U_AS_ATTACH = "as_attach"
+U_AS_WITHIN = "as_within"
+U_AS_DATE = "as_date"
+U_AS_SUBSET_ALL = "all"
+U_AS_SUBSET_INBOX = "inbox"
+U_AS_SUBSET_STARRED = "starred"
+U_AS_SUBSET_SENT = "sent"
+U_AS_SUBSET_DRAFTS = "drafts"
+U_AS_SUBSET_SPAM = "spam"
+U_AS_SUBSET_TRASH = "trash"
+U_AS_SUBSET_ALLSPAMTRASH = "ast"
+U_AS_SUBSET_READ = "read"
+U_AS_SUBSET_UNREAD = "unread"
+U_AS_SUBSET_CATEGORY_PREFIX = "cat_"
+U_THREAD = "th"
+U_PREV_THREAD = "prev"
+U_NEXT_THREAD = "next"
+U_DRAFT_MSG = "draft"
+U_START = "start"
+U_ACTION = "act"
+U_ACTION_TOKEN = "at"
+U_INBOX_ACTION = "ib"
+U_MARKREAD_ACTION = "rd"
+U_MARKUNREAD_ACTION = "ur"
+U_MARKSPAM_ACTION = "sp"
+U_UNMARKSPAM_ACTION = "us"
+U_MARKTRASH_ACTION = "tr"
+U_ADDCATEGORY_ACTION = "ac_"
+U_REMOVECATEGORY_ACTION = "rc_"
+U_ADDSTAR_ACTION = "st"
+U_REMOVESTAR_ACTION = "xst"
+U_ADDSENDERTOCONTACTS_ACTION = "astc"
+U_DELETEMESSAGE_ACTION = "dm"
+U_DELETE_ACTION = "dl"
+U_EMPTYSPAM_ACTION = "es_"
+U_EMPTYTRASH_ACTION = "et_"
+U_SAVEPREFS_ACTION = "prefs"
+U_ADDRESS_ACTION = "a"
+U_CREATECATEGORY_ACTION = "cc_"
+U_DELETECATEGORY_ACTION = "dc_"
+U_RENAMECATEGORY_ACTION = "nc_"
+U_CREATEFILTER_ACTION = "cf"
+U_REPLACEFILTER_ACTION = "rf"
+U_DELETEFILTER_ACTION = "df_"
+U_ACTION_THREAD = "t"
+U_ACTION_MESSAGE = "m"
+U_ACTION_PREF_PREFIX = "p_"
+U_REFERENCED_MSG = "rm"
+U_COMPOSEID = "cmid"
+U_COMPOSE_MODE = "cmode"
+U_COMPOSE_SUBJECT = "su"
+U_COMPOSE_TO = "to"
+U_COMPOSE_CC = "cc"
+U_COMPOSE_BCC = "bcc"
+U_COMPOSE_BODY = "body"
+U_PRINT_THREAD = "pth"
+CONV_VIEW = "conv"
+TLIST_VIEW = "tlist"
+PREFS_VIEW = "prefs"
+HIST_VIEW = "hist"
+COMPOSE_VIEW = "comp"
+HIDDEN_ACTION = 0
+USER_ACTION = 1
+BACKSPACE_ACTION = 2
+
+# TODO: Get these on the fly?
+STANDARD_FOLDERS = [U_INBOX_SEARCH, U_STARRED_SEARCH,
+                    U_ALL_SEARCH, U_DRAFTS_SEARCH,
+                    U_SENT_SEARCH, U_SPAM_SEARCH]
+

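The T_*/MI_* constants above are positional offsets into the list-of-lists structures that Gmail's Javascript emits; field access is purely by index. A toy illustration of the pattern (the thread row below is fabricated, only the indexing scheme is real):

```python
# Offsets copied from lgconstants.py above.
T_THREADID = 0
T_UNREAD = 1
T_SUBJECT_HTML = 6

# A made-up thread row shaped like the twelve-field T_* layout.
thread_row = ['12ab34cd56ef', 1, 0, 'May 12', 'Alice', 0,
              'Hello there', 'snippet text...', [], '', '10', '']

def is_unread(row):
    # Field names exist only client-side; the wire format is positional,
    # which is why offsets like TS_TIMESTAMP shift when Google inserts
    # a field (note the "5 + 1" style adjustments above).
    return bool(row[T_UNREAD])
```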
Added: trunk/bigboard/libgmail/lgcontacts.py
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/lgcontacts.py	Mon May 12 18:42:13 2008
@@ -0,0 +1,445 @@
+"""\n
+########################################################################
+libgmail contacts stuff
+
+This is work in progress and might be the wrong approach.
+
+Contact Stas through the libgmail mailinglist for more info.
+#######################################################################\n
+"""
+
+print __doc__
+
+import urllib,urllib2
+from lgconstants import *
+
+def _buildURL(**kwargs):
+    """Helper function
+    """
+    return "%s?%s" % (URL_GMAIL, urllib.urlencode(kwargs))
+
+class GContacts:
+    """Initial attempt to move the contacts stuff into a module of its own"""
+    def __init__(self,ga):
+        """@ga must be a GmailAccount object with the login method called."""
+        self.ga = ga
+
+    def getContacts(self):
+        """
+        Returns a GmailContactList object
+        that has all the contacts in it as
+        GmailContacts
+        """
+        contactList = []
+        # pnl = a is necessary to get *all* contacts
+        myUrl = _buildURL(view='cl',search='contacts', pnl='a')
+        ## Reminder: Why are there two _parsePage functions, one in the
+        ## ga class and one in the libgmail toplevel code.
+        myData = self.ga._parsePage(myUrl)
+        # This comes back with a dictionary
+        # with entry 'cl'
+        addresses = myData['cl']
+        for entry in addresses:
+            if len(entry) >= 6 and entry[0]=='ce':
+                newGmailContact = GmailContact(entry[1], entry[2], entry[4], entry[5])
+                #### new code used to get all the notes 
+                #### not used yet due to lockdown problems
+                ##rawnotes = self._getSpecInfo(entry[1])
+                ##print rawnotes
+                ##newGmailContact = GmailContact(entry[1], entry[2], entry[4],rawnotes)
+                contactList.append(newGmailContact)
+
+        return GmailContactList(contactList)
+
+    def addContact(self, myContact, *extra_args):
+        """
+        Attempts to add a GmailContact to the gmail
+        address book. Returns true if successful,
+        false otherwise
+
+        Please note that after version 0.1.3.3,
+        addContact takes one argument of type
+        GmailContact, the contact to add.
+
+        The old signature of:
+        addContact(name, email, notes='') is still
+        supported, but deprecated. 
+        """
+        if len(extra_args) > 0:
+            # The user has passed in extra arguments
+            # He/she is probably trying to invoke addContact
+            # using the old, deprecated signature of:
+            # addContact(self, name, email, notes='')        
+            # Build a GmailContact object and use that instead
+            (name, email) = (myContact, extra_args[0])
+            if len(extra_args) > 1:
+                notes = extra_args[1]
+            else:
+                notes = ''
+            myContact = GmailContact(-1, name, email, notes)
+
+        # TODO: In the ideal world, we'd extract these specific
+        # constants into a nice constants file
+        
+        # This mostly comes from the Johnvey Gmail API,
+        # but also from the gmail.py cited earlier
+        myURL = _buildURL(view='up')        
+
+        myDataList =  [ ('act','ec'),
+                        ('at', self.ga._cookieJar._cookies['GMAIL_AT']), # Cookie data?
+                        ('ct_nm', myContact.getName()),
+                        ('ct_em', myContact.getEmail()),
+                        ('ct_id', -1 )
+                       ]
+
+        notes = myContact.getNotes()
+        if notes != '':
+            myDataList.append( ('ctf_n', notes) )
+
+        validinfokeys = [
+                        'i', # IM
+                        'p', # Phone
+                        'd', # Company
+                        'a', # ADR
+                        'e', # Email
+                        'm', # Mobile
+                        'b', # Pager
+                        'f', # Fax
+                        't', # Title
+                        'o', # Other
+                        ]
+
+        moreInfo = myContact.getMoreInfo()
+        ctsn_num = -1
+        if moreInfo != {}:
+            for ctsf,ctsf_data in moreInfo.items():
+                ctsn_num += 1
+                # data section header, WORK, HOME,...
+                sectionenum ='ctsn_%02d' % ctsn_num
+                myDataList.append( ( sectionenum, ctsf ))
+                ctsf_num = -1
+
+                if isinstance(ctsf_data[0],str):
+                    ctsf_num += 1
+                    # data section
+                    subsectionenum = 'ctsf_%02d_%02d_%s' % (ctsn_num, ctsf_num, ctsf_data[0])  # ie. ctsf_00_01_p
+                    myDataList.append( (subsectionenum, ctsf_data[1]) )
+                else:
+                    for info in ctsf_data:
+                        if validinfokeys.count(info[0]) > 0:
+                            ctsf_num += 1
+                            # data section
+                            subsectionenum = 'ctsf_%02d_%02d_%s' % (ctsn_num, ctsf_num, info[0])  # ie. ctsf_00_01_p
+                            myDataList.append( (subsectionenum, info[1]) )
+
+        myData = urllib.urlencode(myDataList)
+        request = urllib2.Request(myURL,
+                                  data = myData)
+        pageData = self.ga._retrievePage(request)
+
+        if pageData.find("The contact was successfully added") == -1:
+            print pageData
+            if pageData.find("already has the email address") > 0:
+                raise Exception("Someone with the same email address already exists in Gmail.")
+            elif pageData.find("https://www.google.com/accounts/ServiceLogin") != -1:
+                raise Exception("Login has expired.")
+            return False
+        else:
+            return True
+
+    def _removeContactById(self, id):
+        """
+        Attempts to remove the contact that occupies
+        id "id" from the gmail address book.
+        Returns True if successful,
+        False otherwise.
+
+        This is a little dangerous since you don't really
+        know who you're deleting. Really,
+        this should return the name or something of the
+        person we just killed.
+
+        Don't call this method.
+        You should be using removeContact instead.
+        """
+        myURL = _buildURL(search='contacts', ct_id = id, c=id, act='dc', at=self.ga._cookieJar._cookies['GMAIL_AT'], view='up')
+        pageData = self.ga._retrievePage(myURL)
+
+        if pageData.find("The contact has been deleted") == -1:
+            return False
+        else:
+            return True
+
+    def removeContact(self, gmailContact):
+        """
+        Attempts to remove the GmailContact passed in
+        Returns True if successful, False otherwise.
+        """
+        # Let's re-fetch the contact list to make
+        # sure we're really deleting the guy
+        # we think we're deleting
+        newContactList = self.getContacts()
+        newVersionOfPersonToDelete = newContactList.getContactById(gmailContact.getId())
+        # Ok, now we need to ensure that gmailContact
+        # is the same as newVersionOfPersonToDelete
+        # and then we can go ahead and delete him/her
+        if (gmailContact == newVersionOfPersonToDelete):
+            return self._removeContactById(gmailContact.getId())
+        else:
+            # We have a cache coherency problem -- someone
+            # else now occupies this ID slot.
+            # TODO: Perhaps signal this in some nice way
+            #       to the end user?
+            
+            print "Unable to delete."
+            print "Has someone else been modifying the contact list at the same time?"
+            print "Old version of person:", gmailContact
+            print "New version of person:", newVersionOfPersonToDelete
+            return False
+
+## Don't remove this. contact stas
+##    def _getSpecInfo(self,id):
+##        """
+##        Return all the notes data.
+##        This is currently not used due to the fact that it requests pages in 
+##        a dos attack manner.
+##        """
+##        myURL =_buildURL(search='contacts',ct_id=id,c=id,\
+##                        at=self._cookieJar._cookies['GMAIL_AT'],view='ct')
+##        pageData = self._retrievePage(myURL)
+##        myData = self._parsePage(myURL)
+##        #print "\nmyData form _getSpecInfo\n",myData
+##        rawnotes = myData['cov'][7]
+##        return rawnotes
+
+class GmailContact:
+    """
+    Class for storing a Gmail Contacts list entry
+    """
+    def __init__(self, name, email, *extra_args):
+        """
+        Returns a new GmailContact object
+        (you can then call addContact on this to commit
+         it to the Gmail addressbook, for example)
+
+        Consider calling setNotes() and setMoreInfo()
+        to add extended information to this contact
+        """
+        # Support populating other fields if we're trying
+        # to invoke this the old way, with the old constructor
+        # whose signature was __init__(self, id, name, email, notes='')
+        id = -1
+        notes = ''
+   
+        if len(extra_args) > 0:
+            (id, name) = (name, email)
+            email = extra_args[0]
+            if len(extra_args) > 1:
+                notes = extra_args[1]
+            else:
+                notes = ''
+
+        self.id = id
+        self.name = name
+        self.email = email
+        self.notes = notes
+        self.moreInfo = {}
+    def __str__(self):
+        return "%s %s %s %s" % (self.id, self.name, self.email, self.notes)
+    def __eq__(self, other):
+        if not isinstance(other, GmailContact):
+            return False
+        return (self.getId() == other.getId()) and \
+               (self.getName() == other.getName()) and \
+               (self.getEmail() == other.getEmail()) and \
+               (self.getNotes() == other.getNotes())
+    def getId(self):
+        return self.id
+    def getName(self):
+        return self.name
+    def getEmail(self):
+        return self.email
+    def getNotes(self):
+        return self.notes
+    def setNotes(self, notes):
+        """
+        Sets the notes field for this GmailContact
+        Note that this does NOT change the note
+        field on Gmail's end; only adding or removing
+        contacts modifies them
+        """
+        self.notes = notes
+
+    def getMoreInfo(self):
+        return self.moreInfo
+    def setMoreInfo(self, moreInfo):
+        """
+        moreInfo format
+        ---------------
+        Use special key values::
+                        'i' =  IM
+                        'p' =  Phone
+                        'd' =  Company
+                        'a' =  ADR
+                        'e' =  Email
+                        'm' =  Mobile
+                        'b' =  Pager
+                        'f' =  Fax
+                        't' =  Title
+                        'o' =  Other
+
+        Simple example::
+
+        moreInfo = {'Home': ( ('a','852 W Barry'),
+                              ('p', '1-773-244-1980'),
+                              ('i', 'aim:brianray34') ) }
+
+        Complex example::
+
+        moreInfo = {
+            'Personal': (('e', 'Home Email'),
+                         ('f', 'Home Fax')),
+            'Work': (('d', 'Sample Company'),
+                     ('t', 'Job Title'),
+                     ('o', 'Department: Department1'),
+                     ('o', 'Department: Department2'),
+                     ('p', 'Work Phone'),
+                     ('m', 'Mobile Phone'),
+                     ('f', 'Work Fax'),
+                     ('b', 'Pager')) }
+        """
+        self.moreInfo = moreInfo 
+    def getVCard(self):
+        """Returns a vCard 3.0 for this
+        contact, as a string"""
+        # The \r is to comply with RFC 2425, section 5.8.1.
+        vcard = "BEGIN:VCARD\r\n"
+        vcard += "VERSION:3.0\r\n"
+        ## Deal with multiline notes
+        ##vcard += "NOTE:%s\n" % self.getNotes().replace("\n","\\n")
+        vcard += "NOTE:%s\r\n" % self.getNotes()
+        # Fake-out N by splitting up whatever we get out of getName
+        # This might not always do 'the right thing'
+        # but it's a *reasonable* compromise
+        fullname = self.getName().split()
+        fullname.reverse()
+        vcard += "N:%s" % ';'.join(fullname) + "\r\n"
+        vcard += "FN:%s\r\n" % self.getName()
+        vcard += "EMAIL;TYPE=INTERNET:%s\r\n" % self.getEmail()
+        vcard += "END:VCARD\r\n\r\n"
+        # Final newline in case we want to put more than one in a file
+        return vcard
+
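The name handling in getVCard can be sketched in isolation. A minimal illustration (modern Python, written for this note and not part of the commit; `vcard_name_fields` is a hypothetical helper) of the reversed-name "fake-out" used for the structured N field:

```python
def vcard_name_fields(full_name):
    # Mirror getVCard's approach: split on whitespace and reverse,
    # so "First Last" becomes the vCard structured name "Last;First".
    parts = full_name.split()
    parts.reverse()
    return "N:%s" % ";".join(parts), "FN:%s" % full_name

n_field, fn_field = vcard_name_fields("Brian Ray")
# n_field == "N:Ray;Brian", fn_field == "FN:Brian Ray"
```

As the comments in getVCard concede, this is only a reasonable compromise: middle names and multi-word surnames will not always land in the right vCard component.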
+class GmailContactList:
+    """
+    Class for storing an entire Gmail contacts list
+    and retrieving contacts by Id, Email address, and name
+    """
+    def __init__(self, contactList):
+        self.contactList = contactList
+    def __str__(self):
+        return '\n'.join([str(item) for item in self.contactList])
+    def getCount(self):
+        """
+        Returns number of contacts
+        """
+        return len(self.contactList)
+    def getAllContacts(self):
+        """
+        Returns an array of all the
+        GmailContacts
+        """
+        return self.contactList
+    def getContactByName(self, name):
+        """
+        Gets the first contact in the
+        address book whose name is 'name'.
+
+        Returns False if no contact
+        could be found
+        """
+        nameList = self.getContactListByName(name)
+        if len(nameList) > 0:
+            return nameList[0]
+        else:
+            return False
+    def getContactByEmail(self, email):
+        """
+        Gets the first contact in the
+        address book whose email is 'email'.
+        As of this writing, Gmail insists
+        upon a unique email; i.e. two contacts
+        cannot share an email address.
+
+        Returns False if no contact
+        could be found
+        """
+        emailList = self.getContactListByEmail(email)
+        if len(emailList) > 0:
+            return emailList[0]
+        else:
+            return False
+    def getContactById(self, myId):
+        """
+        Gets the first contact in the
+        address book whose id is 'myId'.
+
+        REMEMBER: ID IS A STRING
+
+        Returns False if no contact
+        could be found
+        """
+        idList = self.getContactListById(myId)
+        if len(idList) > 0:
+            return idList[0]
+        else:
+            return False
+    def getContactListByName(self, name):
+        """
+        This function returns a LIST
+        of GmailContacts whose name is
+        'name'. 
+
+        Returns an empty list if no contacts
+        were found
+        """
+        nameList = []
+        for entry in self.contactList:
+            if entry.getName() == name:
+                nameList.append(entry)
+        return nameList
+    def getContactListByEmail(self, email):
+        """
+        This function returns a LIST
+        of GmailContacts whose email is
+        'email'. As of this writing, two contacts
+        cannot share an email address, so this
+        should only return just one item.
+        But it doesn't hurt to be prepared.
+
+        Returns an empty list if no contacts
+        were found
+        """
+        emailList = []
+        for entry in self.contactList:
+            if entry.getEmail() == email:
+                emailList.append(entry)
+        return emailList
+    def getContactListById(self, myId):
+        """
+        This function returns a LIST
+        of GmailContacts whose id is
+        'myId'. We expect there only to
+        be one, but just in case!
+
+        Remember: ID IS A STRING
+
+        Returns an empty list if no contacts
+        were found
+        """
+        idList = []
+        for entry in self.contactList:
+            if entry.getId() == myId:
+                idList.append(entry)
+        return idList
+        
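The ctsn_/ctsf_ form-field scheme that addContact builds from moreInfo can be summarised with a small standalone sketch (modern Python, illustration only; `encode_more_info` is a hypothetical helper and covers only the list-of-tuples form of moreInfo, not the single-tuple shortcut the real code also accepts):

```python
def encode_more_info(more_info):
    # Flatten a moreInfo dict into the (name, value) POST pairs that
    # addContact builds: one ctsn_NN header per section, then one
    # ctsf_NN_MM_<key> entry per recognised field in that section.
    valid_keys = set("ipdaembfto")  # IM, phone, company, ADR, email, ...
    pairs = []
    for ctsn, (section, fields) in enumerate(sorted(more_info.items())):
        pairs.append(("ctsn_%02d" % ctsn, section))
        ctsf = -1
        for key, value in fields:
            if key in valid_keys:
                ctsf += 1
                pairs.append(("ctsf_%02d_%02d_%s" % (ctsn, ctsf, key), value))
    return pairs

pairs = encode_more_info({"Home": (("a", "852 W Barry"), ("p", "1-773-244-1980"))})
# [('ctsn_00', 'Home'), ('ctsf_00_00_a', '852 W Barry'), ('ctsf_00_01_p', '1-773-244-1980')]
```

Note the sketch sorts the sections for deterministic output, whereas the real code simply iterates the dictionary.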

Added: trunk/bigboard/libgmail/libgmail.py
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/libgmail.py	Mon May 12 18:42:13 2008
@@ -0,0 +1,1624 @@
+#!/usr/bin/env python
+#
+# libgmail -- Gmail access via Python
+#
+## To get the version number of the available libgmail version.
+## Reminder: add date before next release. This attribute is also
+## used in the setup script.
+Version = '0.1.9' # (Apr 2008)
+
+# Original author: follower rancidbacon com
+# Maintainers: Waseem (wdaher mit edu) and Stas Z (stas linux isbeter nl)
+#
+# License: GPL 2.0
+#
+# NOTE:
+#   You should ensure you are permitted to use this script before using it
+#   to access Google's Gmail servers.
+#
+#
+# Gmail Implementation Notes
+# ==========================
+#
+# * Folders contain message threads, not individual messages. At present I
+#   do not know any way to list all messages without processing thread list.
+#
+
+LG_DEBUG=0
+from lgconstants import *
+
+import os,pprint
+import re
+import urllib
+import urllib2
+import mimetypes
+import types
+import ClientCookie
+from cPickle import load, dump
+
+from email.MIMEBase import MIMEBase
+from email.MIMEText import MIMEText
+from email.MIMEMultipart import MIMEMultipart
+
+GMAIL_URL_LOGIN = "https://www.google.com/accounts/ServiceLoginBoxAuth"
+GMAIL_URL_GMAIL = "https://mail.google.com/mail/?ui=1&"
+
+#  Set to any value to use proxy.
+PROXY_URL = None  # e.g. libgmail.PROXY_URL = 'myproxy.org:3128'
+
+# TODO: Get these on the fly?
+STANDARD_FOLDERS = [U_INBOX_SEARCH, U_STARRED_SEARCH,
+                    U_ALL_SEARCH, U_DRAFTS_SEARCH,
+                    U_SENT_SEARCH, U_SPAM_SEARCH]
+
+# Constants with names not from the Gmail Javascript:
+# TODO: Move to `lgconstants.py`?
+U_SAVEDRAFT_VIEW = "sd"
+
+D_DRAFTINFO = "di"
+# NOTE: All other DI_* field offsets seem to match the MI_* field offsets
+DI_BODY = 19
+
+versionWarned = False # If the Javascript version is different have we
+                      # warned about it?
+
+
+RE_SPLIT_PAGE_CONTENT = re.compile("D\((.*?)\);", re.DOTALL)
+
+class GmailError(Exception):
+    '''
+    Exception thrown upon gmail-specific failures, in particular a
+    failure to log in and a failure to parse responses.
+
+    '''
+    pass
+
+class GmailSendError(Exception):
+    '''
+    Exception to throw if we're unable to send a message
+    '''
+    pass
+
+def _parsePage(pageContent):
+    """
+    Parse the supplied HTML page and extract useful information from
+    the embedded Javascript.
+    
+    """
+    lines = pageContent.splitlines()
+    data = '\n'.join([x for x in lines if x and x[0] in ['D', ')', ',', ']']])
+    #data = data.replace(',,',',').replace(',,',',')
+    data = re.sub(',{2,}', ',', data)
+    
+    result = []
+    try:
+        exec data in {'__builtins__': None}, {'D': lambda x: result.append(x)}
+    except SyntaxError,info:
+        print info
+        raise GmailError, 'Failed to parse data returned from gmail.'
+
+    items = result 
+    itemsDict = {}
+    namesFoundTwice = []
+    for item in items:
+        name = item[0]
+        try:
+            parsedValue = item[1:]
+        except Exception:
+            parsedValue = ['']
+        if itemsDict.has_key(name):
+            # This handles the case where a name key is used more than
+            # once (e.g. mail items, mail body etc) and automatically
+            # places the values into list.
+            # TODO: Check this actually works properly, it's early... :-)
+            
+            if len(parsedValue) and type(parsedValue[0]) is types.ListType:
+                for value in parsedValue:
+                    itemsDict[name].append(value)
+            else:
+                itemsDict[name].append(parsedValue)
+        else:
+            if len(parsedValue) and type(parsedValue[0]) is types.ListType:
+                itemsDict[name] = []
+                for value in parsedValue:
+                    itemsDict[name].append(value)
+            else:
+                itemsDict[name] = [parsedValue]
+
+    return itemsDict
+
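The D(...) extraction that _parsePage performs can be illustrated with a self-contained sketch (modern Python, illustration only; the real code filters lines and execs them with D bound to a collector, whereas this toy `collect_d_payloads` helper simply evals each regex match):

```python
import re

def collect_d_payloads(page_content):
    # Gmail pages embed their data as javascript calls of the form D([...]);
    # For simple cases the array literals are also valid Python literals,
    # so we can pull them out with the same regex the library uses and
    # evaluate them with builtins disabled.
    result = []
    for payload in re.findall(r"D\((.*?)\);", page_content, re.DOTALL):
        result.append(eval(payload, {"__builtins__": None}, {}))
    return result

page = 'noise\nD(["qu","123 MB","1024 MB","12%"]);\nD(["ts",0,50,128]);\nmore noise'
items = collect_d_payloads(page)
# items[0] would be a quota-style record, items[1] a thread-list summary record
```

Real Gmail payloads contain javascript-only constructs (e.g. bare commas for elided elements), which is why the library instead rewrites the text and execs it with D as a callback.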
+def _splitBunches(infoItems):# Is this still needed ?? Stas
+    """
+    Utility to help make it easy to iterate over each item separately,
+    even if they were bunched on the page.
+    """
+    result= []
+    # TODO: Decide if this is the best approach.
+    for group in infoItems:
+        if type(group) == tuple:
+            result.extend(group)
+        else:
+            result.append(group)
+    return result
+
+class SmartRedirectHandler(ClientCookie.HTTPRedirectHandler):
+    def __init__(self, cookiejar):
+        self.cookiejar = cookiejar
+
+    def http_error_302(self, req, fp, code, msg, headers):
+        # The location redirect doesn't seem to change
+        # the hostname header appropriately, so we do it
+        # by hand. (Is this a bug in urllib2?)
+        new_host = re.match(r'http[s]*://(.*?\.google\.com)',
+                            headers.getheader('Location'))
+        if new_host:
+            req.add_header("Host", new_host.groups()[0])
+        result = ClientCookie.HTTPRedirectHandler.http_error_302(
+            self, req, fp, code, msg, headers)              
+        return result
+       
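The Host fix-up in SmartRedirectHandler hinges on one regex. A standalone sketch of that extraction (illustration only; `redirect_host` is a hypothetical helper using the same pattern as the handler above):

```python
import re

def redirect_host(location):
    # Pull the google.com hostname out of a Location header value so the
    # request's Host header can be rewritten to match the redirect target.
    m = re.match(r"http[s]*://(.*?\.google\.com)", location)
    return m.group(1) if m else None

# redirect_host("https://mail.google.com/mail/?x=1") yields "mail.google.com";
# non-google URLs yield None, leaving the Host header untouched.
```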
+    
+def _buildURL(**kwargs):
+    """
+    """
+    return "%s%s" % (URL_GMAIL, urllib.urlencode(kwargs))
+
+
+
+def _paramsToMime(params, filenames, files):
+    """
+    """
+    mimeMsg = MIMEMultipart("form-data")
+
+    for name, value in params.iteritems():
+        mimeItem = MIMEText(value)
+        mimeItem.add_header("Content-Disposition", "form-data", name=name)
+
+        # TODO: Handle this better...?
+        for hdr in ['Content-Type','MIME-Version','Content-Transfer-Encoding']:
+            del mimeItem[hdr]
+
+        mimeMsg.attach(mimeItem)
+
+    if filenames or files:
+        filenames = filenames or []
+        files = files or []
+        for idx, item in enumerate(filenames + files):
+            # TODO: This is messy, tidy it...
+            if isinstance(item, str):
+                # We assume it's a file path...
+                filename = item
+                contentType = mimetypes.guess_type(filename)[0]
+                payload = open(filename, "rb").read()
+            else:
+                # We assume it's an `email.Message.Message` instance...
+                # TODO: Make more use of the pre-encoded information?
+                filename = item.get_filename()
+                contentType = item.get_content_type()
+                payload = item.get_payload(decode=True)
+                
+            if not contentType:
+                contentType = "application/octet-stream"
+                
+            mimeItem = MIMEBase(*contentType.split("/"))
+            mimeItem.add_header("Content-Disposition", "form-data",
+                                name="file%s" % idx, filename=filename)
+            # TODO: Encode the payload?
+            mimeItem.set_payload(payload)
+
+            # TODO: Handle this better...?
+            for hdr in ['MIME-Version','Content-Transfer-Encoding']:
+                del mimeItem[hdr]
+
+            mimeMsg.attach(mimeItem)
+
+    del mimeMsg['MIME-Version']
+
+    return mimeMsg
+
+
+class GmailLoginFailure(Exception):
+    """
+    Raised whenever the login process fails--could be wrong username/password,
+    or Gmail service error, for example.
+    Extract the error message like this:
+    try:
+        foobar 
+    except GmailLoginFailure,e:
+        mesg = e.message# or
+        print e# uses the __str__
+    """
+    def __init__(self,message):
+        self.message = message
+    def __str__(self):
+        return repr(self.message)
+
+class GmailAccount:
+    """
+    """
+
+    def __init__(self, name = "", pw = "", state = None, domain = None):
+        global URL_LOGIN, URL_GMAIL
+        """
+        """
+        self.domain = domain
+        if self.domain:
+            URL_LOGIN = "https://www.google.com/a/" + self.domain + "/LoginAction2"
+            URL_GMAIL = "http://mail.google.com/a/" + self.domain + "/?ui=1&"
+
+        else:
+            URL_LOGIN = GMAIL_URL_LOGIN
+            URL_GMAIL = GMAIL_URL_GMAIL
+        if name and pw:
+            self.name = name
+            self._pw = pw
+
+            self._cookieJar = ClientCookie.LWPCookieJar()
+            opener = ClientCookie.build_opener(ClientCookie.HTTPCookieProcessor(self._cookieJar))
+            ClientCookie.install_opener(opener)
+            
+            if PROXY_URL is not None:
+                import gmail_transport
+
+                self.opener = ClientCookie.build_opener(gmail_transport.ConnectHTTPHandler(proxy = PROXY_URL),
+                                  gmail_transport.ConnectHTTPSHandler(proxy = PROXY_URL),
+                                  SmartRedirectHandler(self._cookieJar))
+            else:
+                self.opener = ClientCookie.build_opener(
+                                ClientCookie.HTTPHandler(debuglevel=0),
+                                ClientCookie.HTTPSHandler(debuglevel=0),
+                                SmartRedirectHandler(self._cookieJar))
+        elif state:
+            # TODO: Check for stale state cookies?
+            self.name, self._cookieJar = state.state
+        else:
+            raise ValueError("GmailAccount must be instantiated with " \
+                             "either GmailSessionState object or name " \
+                             "and password.")
+
+        self._cachedQuotaInfo = None
+        self._cachedLabelNames = None
+        
+
+    def login(self):
+        """
+        """
+        # TODO: Throw exception if we were instantiated with state?
+        if self.domain:
+            data = urllib.urlencode({'continue': URL_GMAIL,
+                                     'at'      : 'null',
+                                     'service' : 'mail',
+                                     'Email': self.name,
+                                     'Passwd': self._pw,
+                                     })
+        else:
+            data = urllib.urlencode({'continue': URL_GMAIL,
+                                     'Email': self.name,
+                                     'Passwd': self._pw,
+                                     })
+                                           
+        headers = {'Host': 'www.google.com',
+                   'User-Agent': 'Mozilla/5.0 (Compatible; libgmail-python)'}
+
+        req = ClientCookie.Request(URL_LOGIN, data=data, headers=headers)
+        pageData = self._retrievePage(req)
+        
+        if not self.domain:
+            # The GV cookie no longer comes in this page for
+            # "Apps", so this bottom portion is unnecessary for it.
+            # This requests the page that provides the required "GV" cookie.
+            RE_PAGE_REDIRECT = 'CheckCookie\?continue=([^"\']+)' 
+        
+            # TODO: Catch more failure exceptions here...?
+            try:
+                link = re.search(RE_PAGE_REDIRECT, pageData).group(1)
+                redirectURL = urllib2.unquote(link)
+                redirectURL = redirectURL.replace('\\x26', '&')
+            
+            except AttributeError:
+                raise GmailLoginFailure("Login failed. (Wrong username/password?)")
+            # We aren't concerned with the actual content of this page,
+            # just the cookie that is returned with it.
+            pageData = self._retrievePage(redirectURL)
+
+    def getCookie(self,cookiename):
+        # TODO: Is there a way to extract the value directly?
+        for index, cookie in enumerate(self._cookieJar):
+            if cookie.name == cookiename:
+                return cookie.value
+        return ""
+
+    def _retrievePage(self, urlOrRequest):
+        """
+        """
+        if self.opener is None:
+            raise GmailError("Cannot find urlopener")
+        
+        # ClientCookieify it, if it hasn't been already
+        if not isinstance(urlOrRequest, urllib2.Request):
+            req = ClientCookie.Request(urlOrRequest)
+        else:
+            req = urlOrRequest
+
+        req.add_header('User-Agent',
+                       'Mozilla/5.0 (Compatible; libgmail-python)')
+        
+        try:
+            resp = self.opener.open(req)
+        except urllib2.HTTPError,info:
+            print info
+            return None
+        pageData = resp.read()
+
+        # TODO: This, for some reason, is still necessary?
+        self._cookieJar.extract_cookies(resp, req)
+
+        # TODO: Enable logging of page data for debugging purposes?
+        return pageData
+
+
+    def _parsePage(self, urlOrRequest):
+        """
+        Retrieve & then parse the requested page content.
+        
+        """
+        items = _parsePage(self._retrievePage(urlOrRequest))
+        # Automatically cache some things like quota usage.
+        # TODO: Cache more?
+        # TODO: Expire cached values?
+        # TODO: Do this better.
+        try:
+            self._cachedQuotaInfo = items[D_QUOTA]
+        except KeyError:
+            pass
+        #pprint.pprint(items)
+        
+        try:
+            self._cachedLabelNames = [category[CT_NAME] for category in items[D_CATEGORIES][0]]
+        except KeyError:
+            pass
+        
+        return items
+
+
+    def _parseSearchResult(self, searchType, start = 0, **kwargs):
+        """
+        """
+        params = {U_SEARCH: searchType,
+                  U_START: start,
+                  U_VIEW: U_THREADLIST_VIEW,
+                  }
+        params.update(kwargs)
+        return self._parsePage(_buildURL(**params))
+
+
+    def _parseThreadSearch(self, searchType, allPages = False, **kwargs):
+        """
+
+        Only works for thread-based results at present. # TODO: Change this?
+        """
+        start = 0
+        tot = 0
+        threadsInfo = []
+        # Option to get *all* threads if multiple pages are used.
+        while (start == 0) or (allPages and
+                               len(threadsInfo) < threadListSummary[TS_TOTAL]):
+            
+                items = self._parseSearchResult(searchType, start, **kwargs)
+                #TODO: Handle single & zero result case better? Does this work?
+                try:
+                    threads = items[D_THREAD]
+                except KeyError:
+                    break
+                else:
+                    for th in threads:
+                        if not type(th[0]) is types.ListType:
+                            th = [th]
+                        threadsInfo.append(th)
+                    # TODO: Check if the total or per-page values have changed?
+                    threadListSummary = items[D_THREADLIST_SUMMARY][0]
+                    threadsPerPage = threadListSummary[TS_NUM]
+    
+                    start += threadsPerPage
+        
+        # TODO: Record whether or not we retrieved all pages..?
+        return GmailSearchResult(self, (searchType, kwargs), threadsInfo)
+
+
+    def _retrieveJavascript(self, version = ""):
+        """
+
+        Note: `version` seems to be ignored.
+        """
+        return self._retrievePage(_buildURL(view = U_PAGE_VIEW,
+                                            name = "js",
+                                            ver = version))
+        
+        
+    def getMessagesByFolder(self, folderName, allPages = False):
+        """
+
+        Folders contain conversation/message threads.
+
+          `folderName` -- As set in Gmail interface.
+
+        Returns a `GmailSearchResult` instance.
+
+        *** TODO: Change all "getMessagesByX" to "getThreadsByX"? ***
+        """
+        return self._parseThreadSearch(folderName, allPages = allPages)
+
+
+    def getMessagesByQuery(self, query,  allPages = False):
+        """
+
+        Returns a `GmailSearchResult` instance.
+        """
+        return self._parseThreadSearch(U_QUERY_SEARCH, q = query,
+                                       allPages = allPages)
+
+    
+    def getQuotaInfo(self, refresh = False):
+        """
+
+        Return MB used, Total MB and percentage used.
+        """
+        # TODO: Change this to a property.
+        if not self._cachedQuotaInfo or refresh:
+            # TODO: Handle this better...
+            self.getMessagesByFolder(U_INBOX_SEARCH)
+
+        return self._cachedQuotaInfo[0][:3]
+
+
+    def getLabelNames(self, refresh = False):
+        """
+        """
+        # TODO: Change this to a property?
+        if not self._cachedLabelNames or refresh:
+            # TODO: Handle this better...
+            self.getMessagesByFolder(U_INBOX_SEARCH)
+
+        return self._cachedLabelNames
+
+
+    def getMessagesByLabel(self, label, allPages = False):
+        """
+        """
+        return self._parseThreadSearch(U_CATEGORY_SEARCH,
+                                       cat=label, allPages = allPages)
+    
+    def getRawMessage(self, msgId):
+        """
+        """
+        # U_ORIGINAL_MESSAGE_VIEW seems the only one that returns a page.
+        # All the other U_* results in a 404 exception. Stas
+        PageView = U_ORIGINAL_MESSAGE_VIEW  
+        return self._retrievePage(
+            _buildURL(view=PageView, th=msgId))
+
+    def getUnreadMessages(self):
+        """
+        """
+        return self._parseThreadSearch(U_QUERY_SEARCH,
+                                        q = "is:" + U_AS_SUBSET_UNREAD)
+        
+        
+    def getUnreadMsgCount(self):
+        """
+        """
+        items = self._parseSearchResult(U_QUERY_SEARCH,
+                                        q = "is:" + U_AS_SUBSET_UNREAD)
+        try:
+            result = items[D_THREADLIST_SUMMARY][0][TS_TOTAL_MSGS]
+        except KeyError:
+            result = 0
+        return result
+
+
+    def _getActionToken(self):
+        """
+        """
+        try:
+            at = self.getCookie(ACTION_TOKEN_COOKIE)
+        except KeyError:
+            self.getLabelNames(True) 
+            at = self.getCookie(ACTION_TOKEN_COOKIE)
+
+        return at
+
+
+    def sendMessage(self, msg, asDraft = False, _extraParams = None):
+        """
+
+          `msg` -- `GmailComposedMessage` instance.
+
+          `_extraParams` -- Dictionary containing additional parameters
+                            to put into the POST message. (Not officially
+                            for external use, more to make experimenting
+                            with additional features a little easier.)
+        
+        Note: Now returns `GmailMessageStub` instance with populated
+              `id` (and `_account`) fields on success or None on failure.
+
+        """
+        # TODO: Handle drafts separately?
+        params = {U_VIEW: [U_SENDMAIL_VIEW, U_SAVEDRAFT_VIEW][asDraft],
+                  U_REFERENCED_MSG: "",
+                  U_THREAD: "",
+                  U_DRAFT_MSG: "",
+                  U_COMPOSEID: "1",
+                  U_ACTION_TOKEN: self._getActionToken(),
+                  U_COMPOSE_TO: msg.to,
+                  U_COMPOSE_CC: msg.cc,
+                  U_COMPOSE_BCC: msg.bcc,
+                  "subject": msg.subject,
+                  "msgbody": msg.body,
+                  }
+
+        if _extraParams:
+            params.update(_extraParams)
+
+        # Amongst other things, I used the following post to work this out:
+        # <http://groups.google.com/groups?
+        #  selm=mailman.1047080233.20095.python-list%40python.org>
+        mimeMessage = _paramsToMime(params, msg.filenames, msg.files)
+
+        #### TODO: Ughh, tidy all this up & do it better...
+        ## This horrible mess is here for two main reasons:
+        ##  1. The `Content-Type` header (which also contains the boundary
+        ##     marker) needs to be extracted from the MIME message so
+        ##     we can send it as the request `Content-Type` header instead.
+        ##  2. It seems the form submission needs to use "\r\n" for new
+        ##     lines instead of the "\n" returned by `as_string()`.
+        ##     I tried changing the value of `NL` used by the `Generator` class
+        ##     but it didn't work so I'm doing it this way until I figure
+        ##     out how to do it properly. Of course, first try, if the payloads
+        ##     contained "\n" sequences they got replaced too, which corrupted
+        ##     the attachments. I could probably encode the submission,
+        ##     which would probably be nicer, but in the meantime I'm kludging
+        ##     this workaround that replaces all non-text payloads with a
+        ##     marker, changes all "\n" to "\r\n" and finally replaces the
+        ##     markers with the original payloads.
+        ## Yeah, I know, it's horrible, but hey it works doesn't it? If you've
+        ## got a problem with it, fix it yourself & give me the patch!
+        ##
+        origPayloads = {}
+        FMT_MARKER = "&&&&&&%s&&&&&&"
+
+        for i, m in enumerate(mimeMessage.get_payload()):
+            if not isinstance(m, MIMEText): #Do we care if we change text ones?
+                origPayloads[i] = m.get_payload()
+                m.set_payload(FMT_MARKER % i)
+
+        mimeMessage.epilogue = ""
+        msgStr = mimeMessage.as_string()
+        contentTypeHeader, data = msgStr.split("\n\n", 1)
+        contentTypeHeader = contentTypeHeader.split(":", 1)
+        data = data.replace("\n", "\r\n")
+        for k,v in origPayloads.iteritems():
+            data = data.replace(FMT_MARKER % k, v)
+        ####
+        
+        req = ClientCookie.Request(_buildURL(), data = data)
+        req.add_header(*contentTypeHeader)
+        items = self._parsePage(req)
+
+        # TODO: Check composeid?
+        # Sometimes we get the success message
+        # but the id is 0 and no message is sent
+        result = None
+        resultInfo = items[D_SENDMAIL_RESULT][0]
+        
+        if resultInfo[SM_SUCCESS]:
+            result = GmailMessageStub(id = resultInfo[SM_NEWTHREADID],
+                                      _account = self)
+        else:
+            raise GmailSendError, resultInfo[SM_MSG]
+        return result
+
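+    ## Example usage (sketch only; assumes a logged-in `GmailAccount`
+    ## instance named `ga`):
+    ##   msg = GmailComposedMessage(to="someone@example.com",
+    ##                              subject="Hello", body="Hi there.")
+    ##   sentStub = ga.sendMessage(msg)   # `GmailMessageStub` on success
+    ##   draftStub = ga.sendMessage(msg, asDraft = True)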
+
+    def trashMessage(self, msg):
+        """
+        """
+        # TODO: Decide if we should make this a method of `GmailMessage`.
+        # TODO: Should we check we have been given a `GmailMessage` instance?
+        params = {
+            U_ACTION: U_DELETEMESSAGE_ACTION,
+            U_ACTION_MESSAGE: msg.id,
+            U_ACTION_TOKEN: self._getActionToken(),
+            }
+
+        items = self._parsePage(_buildURL(**params))
+
+        # TODO: Mark as trashed on success?
+        return (items[D_ACTION_RESULT][0][AR_SUCCESS] == 1)
+
+
+    def _doThreadAction(self, actionId, thread):
+        """
+        """
+        # TODO: Decide if we should make this a method of `GmailThread`.
+        # TODO: Should we check we have been given a `GmailThread` instance?
+        params = {
+            U_SEARCH: U_ALL_SEARCH, #TODO:Check this search value always works.
+            U_VIEW: U_UPDATE_VIEW,
+            U_ACTION: actionId,
+            U_ACTION_THREAD: thread.id,
+            U_ACTION_TOKEN: self._getActionToken(),
+            }
+
+        items = self._parsePage(_buildURL(**params))
+
+        return (items[D_ACTION_RESULT][0][AR_SUCCESS] == 1)
+        
+        
+    def trashThread(self, thread):
+        """
+        """
+        # TODO: Decide if we should make this a method of `GmailThread`.
+        # TODO: Should we check we have been given a `GmailThread` instance?
+
+        result = self._doThreadAction(U_MARKTRASH_ACTION, thread)
+        
+        # TODO: Mark as trashed on success?
+        return result
+
+
+    def _createUpdateRequest(self, actionId): #extraData):
+        """
+        Helper method to create a Request instance for an update (view)
+        action.
+
+        Returns populated `Request` instance.
+        """
+        params = {
+            U_VIEW: U_UPDATE_VIEW,
+            }
+
+        data = {
+            U_ACTION: actionId,
+            U_ACTION_TOKEN: self._getActionToken(),
+            }
+
+        #data.update(extraData)
+
+        req = ClientCookie.Request(_buildURL(**params),
+                              data = urllib.urlencode(data))
+
+        return req
+
+
+    # TODO: Extract additional common code from handling of labels?
+    def createLabel(self, labelName):
+        """
+        """
+        req = self._createUpdateRequest(U_CREATECATEGORY_ACTION + labelName)
+
+        # Note: Label name cache is updated by this call as well. (Handy!)
+        items = self._parsePage(req)
+        return (items[D_ACTION_RESULT][0][AR_SUCCESS] == 1)
+
+
+    def deleteLabel(self, labelName):
+        """
+        """
+        # TODO: Check labelName exits?
+        req = self._createUpdateRequest(U_DELETECATEGORY_ACTION + labelName)
+
+        # Note: Label name cache is updated by this call as well. (Handy!)
+        items = self._parsePage(req)
+
+        return (items[D_ACTION_RESULT][0][AR_SUCCESS] == 1)
+
+
+    def renameLabel(self, oldLabelName, newLabelName):
+        """
+        """
+        # TODO: Check oldLabelName exits?
+        req = self._createUpdateRequest("%s%s^%s" % (U_RENAMECATEGORY_ACTION,
+                                                   oldLabelName, newLabelName))
+
+        # Note: Label name cache is updated by this call as well. (Handy!)
+        items = self._parsePage(req)
+
+        return (items[D_ACTION_RESULT][0][AR_SUCCESS] == 1)
+
+    def storeFile(self, filename, label = None):
+        """
+        """
+        # TODO: Handle files larger than single attachment size.
+        # TODO: Allow file data objects to be supplied?
+        FILE_STORE_VERSION = "FSV_01"
+        FILE_STORE_SUBJECT_TEMPLATE = "%s %s" % (FILE_STORE_VERSION, "%s")
+
+        subject = FILE_STORE_SUBJECT_TEMPLATE % os.path.basename(filename)
+
+        msg = GmailComposedMessage(to="", subject=subject, body="",
+                                   filenames=[filename])
+
+        draftMsg = self.sendMessage(msg, asDraft = True)
+
+        if draftMsg and label:
+            draftMsg.addLabel(label)
+
+        return draftMsg
+
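+    ## Example usage (sketch only; assumes a logged-in `GmailAccount`
+    ## instance named `ga`):
+    ##   draft = ga.storeFile("/tmp/notes.txt", label = "filestore")
+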
+    ## CONTACTS SUPPORT
+    def getContacts(self):
+        """
+        Returns a GmailContactList object
+        that has all the contacts in it as
+        GmailContacts
+        """
+        contactList = []
+        # pnl = a is necessary to get *all* contacts
+        myUrl = _buildURL(view='cl',search='contacts', pnl='a')
+        myData = self._parsePage(myUrl)
+        # This comes back with a dictionary
+        # with entry 'cl'
+        addresses = myData['cl']
+        for entry in addresses:
+            if len(entry) >= 6 and entry[0]=='ce':
+                newGmailContact = GmailContact(entry[1], entry[2], entry[4], entry[5])
+                #### new code used to get all the notes 
+                #### not used yet due to lockdown problems
+                ##rawnotes = self._getSpecInfo(entry[1])
+                ##print rawnotes
+                ##newGmailContact = GmailContact(entry[1], entry[2], entry[4],rawnotes)
+                contactList.append(newGmailContact)
+
+        return GmailContactList(contactList)
+
+    def addContact(self, myContact, *extra_args):
+        """
+        Attempts to add a GmailContact to the Gmail
+        address book. Returns True if successful,
+        False otherwise.
+
+        Please note that after version 0.1.3.3,
+        addContact takes one argument of type
+        GmailContact, the contact to add.
+
+        The old signature of:
+        addContact(name, email, notes='') is still
+        supported, but deprecated. 
+        """
+        if len(extra_args) > 0:
+            # The user has passed in extra arguments
+            # He/she is probably trying to invoke addContact
+            # using the old, deprecated signature of:
+            # addContact(self, name, email, notes='')        
+            # Build a GmailContact object and use that instead
+            (name, email) = (myContact, extra_args[0])
+            if len(extra_args) > 1:
+                notes = extra_args[1]
+            else:
+                notes = ''
+            myContact = GmailContact(-1, name, email, notes)
+
+        # TODO: In the ideal world, we'd extract these specific
+        # constants into a nice constants file
+        
+        # This mostly comes from the Johnvey Gmail API,
+        # but also from the gmail.py cited earlier
+        myURL = _buildURL(view='up')        
+
+        myDataList =  [ ('act','ec'),
+                        ('at', self.getCookie(ACTION_TOKEN_COOKIE)),
+                        ('ct_nm', myContact.getName()),
+                        ('ct_em', myContact.getEmail()),
+                        ('ct_id', -1 )
+                       ]
+
+        notes = myContact.getNotes()
+        if notes != '':
+            myDataList.append( ('ctf_n', notes) )
+
+        validinfokeys = [
+                        'i', # IM
+                        'p', # Phone
+                        'd', # Company
+                        'a', # ADR
+                        'e', # Email
+                        'm', # Mobile
+                        'b', # Pager
+                        'f', # Fax
+                        't', # Title
+                        'o', # Other
+                        ]
+
+        moreInfo = myContact.getMoreInfo()
+        ctsn_num = -1
+        if moreInfo != {}:
+            for ctsf,ctsf_data in moreInfo.items():
+                ctsn_num += 1
+                # data section header, WORK, HOME,...
+                sectionenum ='ctsn_%02d' % ctsn_num
+                myDataList.append( ( sectionenum, ctsf ))
+                ctsf_num = -1
+
+                if isinstance(ctsf_data[0],str):
+                    ctsf_num += 1
+                    # data section
+                    subsectionenum = 'ctsf_%02d_%02d_%s' % (ctsn_num, ctsf_num, ctsf_data[0])  # ie. ctsf_00_01_p
+                    myDataList.append( (subsectionenum, ctsf_data[1]) )
+                else:
+                    for info in ctsf_data:
+                        if validinfokeys.count(info[0]) > 0:
+                            ctsf_num += 1
+                            # data section
+                            subsectionenum = 'ctsf_%02d_%02d_%s' % (ctsn_num, ctsf_num, info[0])  # ie. ctsf_00_01_p
+                            myDataList.append( (subsectionenum, info[1]) )
+
+        myData = urllib.urlencode(myDataList)
+        request = ClientCookie.Request(myURL,
+                                       data = myData)
+        pageData = self._retrievePage(request)
+
+        if pageData.find("The contact was successfully added") == -1:
+            print pageData
+            if pageData.find("already has the email address") > 0:
+                raise Exception("Someone with same email already exists in Gmail.")
+            elif pageData.find("https://www.google.com/accounts/ServiceLogin") != -1:
+                raise Exception("Login has expired.")
+            return False
+        else:
+            return True
+
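+    ## Example usage (sketch only; assumes a logged-in `GmailAccount`
+    ## instance named `ga`):
+    ##   contact = GmailContact("Jane Doe", "jane@example.com")
+    ##   ga.addContact(contact)
+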
+    def _removeContactById(self, id):
+        """
+        Attempts to remove the contact whose id is `id`
+        from the Gmail address book.
+        Returns True if successful,
+        False otherwise.
+
+        This is a little dangerous since you don't really
+        know who you're deleting. Really,
+        this should return the name or something of the
+        person we just killed.
+
+        Don't call this method.
+        You should be using removeContact instead.
+        """
+        myURL = _buildURL(search='contacts', ct_id = id, c=id, act='dc', at=self.getCookie(ACTION_TOKEN_COOKIE), view='up')
+        pageData = self._retrievePage(myURL)
+
+        if pageData.find("The contact has been deleted") == -1:
+            return False
+        else:
+            return True
+
+    def removeContact(self, gmailContact):
+        """
+        Attempts to remove the GmailContact passed in.
+        Returns True if successful, False otherwise.
+        """
+        # Let's re-fetch the contact list to make
+        # sure we're really deleting the guy
+        # we think we're deleting
+        newContactList = self.getContacts()
+        newVersionOfPersonToDelete = newContactList.getContactById(gmailContact.getId())
+        # Ok, now we need to ensure that gmailContact
+        # is the same as newVersionOfPersonToDelete
+        # and then we can go ahead and delete him/her
+        if (gmailContact == newVersionOfPersonToDelete):
+            return self._removeContactById(gmailContact.getId())
+        else:
+            # We have a cache coherency problem -- someone
+            # else now occupies this ID slot.
+            # TODO: Perhaps signal this in some nice way
+            #       to the end user?
+            
+            print "Unable to delete."
+            print "Has someone else been modifying the contacts list while we have?"
+            print "Old version of person:",gmailContact
+            print "New version of person:",newVersionOfPersonToDelete
+            return False
+
+## Don't remove this. contact stas
+##    def _getSpecInfo(self,id):
+##        """
+##        Return all the notes data.
+##        This is currently not used due to the fact that it requests pages in 
+##        a dos attack manner.
+##        """
+##        myURL =_buildURL(search='contacts',ct_id=id,c=id,\
+##                        at=self._cookieJar._cookies['GMAIL_AT'],view='ct')
+##        pageData = self._retrievePage(myURL)
+##        myData = self._parsePage(myURL)
+##        #print "\nmyData form _getSpecInfo\n",myData
+##        rawnotes = myData['cov'][7]
+##        return rawnotes
+
+class GmailContact:
+    """
+    Class for storing a Gmail Contacts list entry
+    """
+    def __init__(self, name, email, *extra_args):
+        """
+        Creates a new GmailContact object
+        (you can then pass it to GmailAccount.addContact
+         to commit it to the Gmail addressbook, for example)
+
+        Consider calling setNotes() and setMoreInfo()
+        to add extended information to this contact
+        """
+        # Support populating other fields if we're trying
+        # to invoke this the old way, with the old constructor
+        # whose signature was __init__(self, id, name, email, notes='')
+        id = -1
+        notes = ''
+   
+        if len(extra_args) > 0:
+            (id, name) = (name, email)
+            email = extra_args[0]
+            if len(extra_args) > 1:
+                notes = extra_args[1]
+            else:
+                notes = ''
+
+        self.id = id
+        self.name = name
+        self.email = email
+        self.notes = notes
+        self.moreInfo = {}
+    def __str__(self):
+        return "%s %s %s %s" % (self.id, self.name, self.email, self.notes)
+    def __eq__(self, other):
+        if not isinstance(other, GmailContact):
+            return False
+        return (self.getId() == other.getId()) and \
+               (self.getName() == other.getName()) and \
+               (self.getEmail() == other.getEmail()) and \
+               (self.getNotes() == other.getNotes())
+    def getId(self):
+        return self.id
+    def getName(self):
+        return self.name
+    def getEmail(self):
+        return self.email
+    def getNotes(self):
+        return self.notes
+    def setNotes(self, notes):
+        """
+        Sets the notes field for this GmailContact
+        Note that this does NOT change the note
+        field on Gmail's end; only adding or removing
+        contacts modifies them
+        """
+        self.notes = notes
+
+    def getMoreInfo(self):
+        return self.moreInfo
+    def setMoreInfo(self, moreInfo):
+        """
+        moreInfo format
+        ---------------
+        Use special key values::
+                        'i' =  IM
+                        'p' =  Phone
+                        'd' =  Company
+                        'a' =  ADR
+                        'e' =  Email
+                        'm' =  Mobile
+                        'b' =  Pager
+                        'f' =  Fax
+                        't' =  Title
+                        'o' =  Other
+
+        Simple example::
+
+        moreInfo = {'Home': ( ('a','852 W Barry'),
+                              ('p', '1-773-244-1980'),
+                              ('i', 'aim:brianray34') ) }
+
+        Complex example::
+
+        moreInfo = {
+            'Personal': (('e', 'Home Email'),
+                         ('f', 'Home Fax')),
+            'Work': (('d', 'Sample Company'),
+                     ('t', 'Job Title'),
+                     ('o', 'Department: Department1'),
+                     ('o', 'Department: Department2'),
+                     ('p', 'Work Phone'),
+                     ('m', 'Mobile Phone'),
+                     ('f', 'Work Fax'),
+                     ('b', 'Pager')) }
+        """
+        self.moreInfo = moreInfo 
+    def getVCard(self):
+        """Returns a vCard 3.0 for this
+        contact, as a string"""
+        # The \r is to comply with RFC 2425, section 5.8.1
+        vcard = "BEGIN:VCARD\r\n"
+        vcard += "VERSION:3.0\r\n"
+        ## Deal with multiline notes
+        ##vcard += "NOTE:%s\n" % self.getNotes().replace("\n","\\n")
+        vcard += "NOTE:%s\r\n" % self.getNotes()
+        # Fake-out N by splitting up whatever we get out of getName
+        # This might not always do 'the right thing'
+        # but it's a *reasonable* compromise
+        fullname = self.getName().split()
+        fullname.reverse()
+        vcard += "N:%s" % ';'.join(fullname) + "\r\n"
+        vcard += "FN:%s\r\n" % self.getName()
+        vcard += "EMAIL;TYPE=INTERNET:%s\r\n" % self.getEmail()
+        vcard += "END:VCARD\r\n\r\n"
+        # Final newline in case we want to put more than one in a file
+        return vcard
+
+class GmailContactList:
+    """
+    Class for storing an entire Gmail contacts list
+    and retrieving contacts by Id, Email address, and name
+    """
+    def __init__(self, contactList):
+        self.contactList = contactList
+    def __str__(self):
+        return '\n'.join([str(item) for item in self.contactList])
+    def getCount(self):
+        """
+        Returns number of contacts
+        """
+        return len(self.contactList)
+    def getAllContacts(self):
+        """
+        Returns an array of all the
+        GmailContacts
+        """
+        return self.contactList
+    def getContactByName(self, name):
+        """
+        Gets the first contact in the
+        address book whose name is 'name'.
+
+        Returns False if no contact
+        could be found
+        """
+        nameList = self.getContactListByName(name)
+        if len(nameList) > 0:
+            return nameList[0]
+        else:
+            return False
+    def getContactByEmail(self, email):
+        """
+        Gets the first contact in the
+        address book whose email is 'email'.
+        As of this writing, Gmail insists
+        upon a unique email; i.e. two contacts
+        cannot share an email address.
+
+        Returns False if no contact
+        could be found
+        """
+        emailList = self.getContactListByEmail(email)
+        if len(emailList) > 0:
+            return emailList[0]
+        else:
+            return False
+    def getContactById(self, myId):
+        """
+        Gets the first contact in the
+        address book whose id is 'myId'.
+
+        REMEMBER: ID IS A STRING
+
+        Returns False if no contact
+        could be found
+        """
+        idList = self.getContactListById(myId)
+        if len(idList) > 0:
+            return idList[0]
+        else:
+            return False
+    def getContactListByName(self, name):
+        """
+        This function returns a LIST
+        of GmailContacts whose name is
+        'name'. 
+
+        Returns an empty list if no contacts
+        were found
+        """
+        nameList = []
+        for entry in self.contactList:
+            if entry.getName() == name:
+                nameList.append(entry)
+        return nameList
+    def getContactListByEmail(self, email):
+        """
+        This function returns a LIST
+        of GmailContacts whose email is
+        'email'. As of this writing, two contacts
+        cannot share an email address, so this
+        should only return just one item.
+        But it doesn't hurt to be prepared?
+
+        Returns an empty list if no contacts
+        were found
+        """
+        emailList = []
+        for entry in self.contactList:
+            if entry.getEmail() == email:
+                emailList.append(entry)
+        return emailList
+    def getContactListById(self, myId):
+        """
+        This function returns a LIST
+        of GmailContacts whose id is
+        'myId'. We expect there only to
+        be one, but just in case!
+
+        Remember: ID IS A STRING
+
+        Returns an empty list if no contacts
+        were found
+        """
+        idList = []
+        for entry in self.contactList:
+            if entry.getId() == myId:
+                idList.append(entry)
+        return idList
+    def search(self, searchTerm):
+       """
+       This function returns a LIST
+       of GmailContacts whose name or
+       email address matches the 'searchTerm'.
+
+       Returns an empty list if no matches
+       were found.
+       """
+       searchResults = []
+       for entry in self.contactList:
+           p = re.compile(searchTerm, re.IGNORECASE)
+           if p.search(entry.getName()) or p.search(entry.getEmail()):
+               searchResults.append(entry)
+       return searchResults
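+
+    ## Example usage (sketch only; assumes `contacts` is a
+    ## `GmailContactList`, e.g. from `ga.getContacts()`):
+    ##   for match in contacts.search("jane"):
+    ##       print match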
+   
+class GmailSearchResult:
+    """
+    """
+
+    def __init__(self, account, search, threadsInfo):
+        """
+
+        `threadsInfo` -- As returned from Gmail but unbunched.
+        """
+        #print "\nthreadsInfo\n",threadsInfo
+        try:
+            if not type(threadsInfo[0]) is types.ListType:
+                threadsInfo = [threadsInfo]
+        except IndexError:
+            # print "No messages found"
+            pass            
+            
+        self._account = account
+        self.search = search # TODO: Turn into object + format nicely.
+        self._threads = []
+        
+        for thread in threadsInfo:
+            self._threads.append(GmailThread(self, thread[0]))
+
+
+    def __iter__(self):
+        """
+        """
+        return iter(self._threads)
+
+    def __len__(self):
+        """
+        """
+        return len(self._threads)
+
+    def __getitem__(self,key):
+        """
+        """
+        return self._threads.__getitem__(key)
+
+
+class GmailSessionState:
+    """
+    """
+
+    def __init__(self, account = None, filename = ""):
+        """
+        """
+        if account:
+            self.state = (account.name, account._cookieJar)
+        elif filename:
+            self.state = load(open(filename, "rb"))
+        else:
+            raise ValueError("GmailSessionState must be instantiated with " \
+                             "either GmailAccount object or filename.")
+
+
+    def save(self, filename):
+        """
+        """
+        dump(self.state, open(filename, "wb"), -1)
+
+
+class _LabelHandlerMixin(object):
+    """
+
+    Note: Because a message id can be used as a thread id this works for
+          messages as well as threads.
+    """
+    def __init__(self):
+        self._labels = None
+        
+    def _makeLabelList(self, labelList):
+        self._labels = labelList
+    
+    def addLabel(self, labelName):
+        """
+        """
+        # Note: It appears this also automatically creates new labels.
+        result = self._account._doThreadAction(U_ADDCATEGORY_ACTION+labelName,
+                                               self)
+        if not self._labels:
+            self._makeLabelList([])
+        # TODO: Caching this seems a little dangerous; suppress duplicates maybe?
+        self._labels.append(labelName)
+        return result
+
+
+    def removeLabel(self, labelName):
+        """
+        """
+        # TODO: Check label is already attached?
+        # Note: An error is not generated if the label is not already attached.
+        result = \
+               self._account._doThreadAction(U_REMOVECATEGORY_ACTION+labelName,
+                                             self)
+        
+        removed = True
+        try:
+            self._labels.remove(labelName)
+        except (AttributeError, ValueError):
+            # Label list not initialised, or label not in the cached list.
+            removed = False
+
+        # If we don't check both, we might end up in some weird inconsistent state
+        return result and removed
+
+    def getLabels(self):
+        return self._labels
+    
+
+
+class GmailThread(_LabelHandlerMixin):
+    """
+    Note: As far as I can tell, the "canonical" thread id is always the same
+          as the id of the last message in the thread. But it appears that
+          the id of any message in the thread can be used to retrieve
+          the thread information.
+    
+    """
+
+    def __init__(self, parent, threadsInfo):
+        """
+        """
+        _LabelHandlerMixin.__init__(self)
+        
+        # TODO Handle this better?
+        self._parent = parent
+        self._account = self._parent._account
+        
+        self.id = threadsInfo[T_THREADID] # TODO: Change when canonical updated?
+        self.subject = threadsInfo[T_SUBJECT_HTML]
+
+        self.snippet = threadsInfo[T_SNIPPET_HTML]
+        #self.extraSummary = threadInfo[T_EXTRA_SNIPPET] #TODO: What is this?
+
+        # TODO: Store other info?
+        # Extract number of messages in thread/conversation.
+
+        self._authors = threadsInfo[T_AUTHORS_HTML]
+        self.info = threadsInfo
+    
+        try:
+            # TODO: Find out if this information can be found another way...
+            #       (Without another page request.)
+            self._length = int(re.search("\((\d+?)\)\Z",
+                                         self._authors).group(1))
+        except AttributeError:
+            # If there's no message count then the thread only has one message.
+            self._length = 1
+
+        # TODO: Store information known about the last message  (e.g. id)?
+        self._messages = []
+
+        # Populate labels
+        self._makeLabelList(threadsInfo[T_CATEGORIES])
+
+    def __getattr__(self, name):
+        """
+        Dynamically dispatch some interesting thread properties.
+        """
+        attrs = { 'unread': T_UNREAD,
+                  'star': T_STAR,
+                  'date': T_DATE_HTML,
+                  'authors': T_AUTHORS_HTML,
+                  'flags': T_FLAGS,
+                  'subject': T_SUBJECT_HTML,
+                  'snippet': T_SNIPPET_HTML,
+                  'categories': T_CATEGORIES,
+                  'attach': T_ATTACH_HTML,
+                  'matching_msgid': T_MATCHING_MSGID,
+                  'extra_snippet': T_EXTRA_SNIPPET }
+        if name in attrs:
+            return self.info[attrs[name]]
+
+        raise AttributeError("no attribute %s" % name)
+        
+    def __len__(self):
+        """
+        """
+        return self._length
+
+
+    def __iter__(self):
+        """
+        """
+        if not self._messages:
+            self._messages = self._getMessages(self)
+            
+        return iter(self._messages)
+
+    def __getitem__(self, key):
+        """
+        """
+        if not self._messages:
+            self._messages = self._getMessages(self)
+        try:
+            result = self._messages.__getitem__(key)
+        except IndexError:
+            result = []
+        return result
+
+    def _getMessages(self, thread):
+        """
+        """
+        # TODO: Do this better.
+        # TODO: Specify the query folder using our specific search?
+        items = self._account._parseSearchResult(U_QUERY_SEARCH,
+                                                 view = U_CONVERSATION_VIEW,
+                                                 th = thread.id,
+                                                 q = "in:anywhere")
+        result = []
+        # TODO: Handle this better?
+        # Note: This handles both draft & non-draft messages in a thread...
+        for key, isDraft in [(D_MSGINFO, False), (D_DRAFTINFO, True)]:
+            try:
+                msgsInfo = items[key]
+            except KeyError:
+                # No messages of this type (e.g. draft or non-draft)
+                continue
+            else:
+                # TODO: Handle special case of only 1 message in thread better?
+                if type(msgsInfo[0]) != types.ListType:
+                    msgsInfo = [msgsInfo]
+                for msg in msgsInfo:
+                    result += [GmailMessage(thread, msg, isDraft = isDraft)]
+                           
+
+        return result
+
+class GmailMessageStub(_LabelHandlerMixin):
+    """
+
+    Intended to be used where not all message information is known/required.
+
+    NOTE: This may go away.
+    """
+
+    # TODO: Provide way to convert this to a full `GmailMessage` instance
+    #       or allow `GmailMessage` to be created without all info?
+
+    def __init__(self, id = None, _account = None):
+        """
+        """
+        _LabelHandlerMixin.__init__(self)
+        self.id = id
+        self._account = _account
+    
+
+        
+class GmailMessage(object):
+    """
+    """
+    
+    def __init__(self, parent, msgData, isDraft = False):
+        """
+
+        Note: `msgData` can be from either D_MSGINFO or D_DRAFTINFO.
+        """
+        # TODO: Automatically detect if it's a draft or not?
+        # TODO Handle this better?
+        self._parent = parent
+        self._account = self._parent._account
+        
+        self.author = msgData[MI_AUTHORFIRSTNAME]
+        self.id = msgData[MI_MSGID]
+        self.number = msgData[MI_NUM]
+        self.subject = msgData[MI_SUBJECT]
+        self.to = msgData[MI_TO]
+        self.cc = msgData[MI_CC]
+        self.bcc = msgData[MI_BCC]
+        self.sender = msgData[MI_AUTHOREMAIL]
+        
+        self.attachments = [GmailAttachment(self, attachmentInfo)
+                            for attachmentInfo in msgData[MI_ATTACHINFO]]
+
+        # TODO: Populate additional fields & cache...(?)
+
+        # TODO: Handle body differently if it's from a draft?
+        self.isDraft = isDraft
+        
+        self._source = None
+
+
+    def _getSource(self):
+        """
+        """
+        if not self._source:
+            # TODO: Do this more nicely...?
+            # TODO: Strip initial white space & fix up last line ending
+            #       to make it legal as per RFC?
+            self._source = self._account.getRawMessage(self.id)
+
+        return self._source
+
+    source = property(_getSource, doc = "Raw source of the message (fetched lazily).")
+        
+
+
+class GmailAttachment:
+    """
+    """
+
+    def __init__(self, parent, attachmentInfo):
+        """
+        """
+        # TODO Handle this better?
+        self._parent = parent
+        self._account = self._parent._account
+
+        self.id = attachmentInfo[A_ID]
+        self.filename = attachmentInfo[A_FILENAME]
+        self.mimetype = attachmentInfo[A_MIMETYPE]
+        self.filesize = attachmentInfo[A_FILESIZE]
+
+        self._content = None
+
+
+    def _getContent(self):
+        """
+        """
+        if not self._content:
+            # TODO: Do this more nicely...?
+            self._content = self._account._retrievePage(
+                _buildURL(view=U_ATTACHMENT_VIEW, disp="attd",
+                          attid=self.id, th=self._parent._parent.id))
+            
+        return self._content
+
+    content = property(_getContent, doc = "Content of the attachment (fetched lazily).")
+
+
+    def _getFullId(self):
+        """
+
+        Returns the "full path"/"full id" of the attachment. (Used
+        to refer to the file when forwarding.)
+
+        The id is of the form: "<thread_id>_<msg_id>_<attachment_id>"
+        
+        """
+        return "%s_%s_%s" % (self._parent._parent.id,
+                             self._parent.id,
+                             self.id)
+
+    _fullId = property(_getFullId, doc = "")
+
+
+
+class GmailComposedMessage:
+    """
+    """
+
+    def __init__(self, to, subject, body, cc = None, bcc = None,
+                 filenames = None, files = None):
+        """
+
+          `filenames` - list of the file paths of the files to attach.
+          `files` - list of objects implementing sub-set of
+                    `email.Message.Message` interface (`get_filename`,
+                    `get_content_type`, `get_payload`). This is to
+                    allow use of payloads from Message instances.
+                    TODO: Change this to be simpler class we define ourselves?
+        """
+        self.to = to
+        self.subject = subject
+        self.body = body
+        self.cc = cc
+        self.bcc = bcc
+        self.filenames = filenames
+        self.files = files
+
+
+
+if __name__ == "__main__":
+    import sys
+    from getpass import getpass
+
+    try:
+        name = sys.argv[1]
+    except IndexError:
+        name = raw_input("Gmail account name: ")
+        
+    pw = getpass("Password: ")
+    domain = raw_input("Domain? [leave blank for Gmail]: ")
+
+    ga = GmailAccount(name, pw, domain=domain)
+
+    print "\nPlease wait, logging in..."
+
+    try:
+        ga.login()
+    except GmailLoginFailure,e:
+        print "\nLogin failed. (%s)" % e.message
+    else:
+        print "Login successful.\n"
+
+        # TODO: Use properties instead?
+        quotaInfo = ga.getQuotaInfo()
+        quotaMbUsed = quotaInfo[QU_SPACEUSED]
+        quotaMbTotal = quotaInfo[QU_QUOTA]
+        quotaPercent = quotaInfo[QU_PERCENT]
+        print "%s of %s used. (%s)\n" % (quotaMbUsed, quotaMbTotal, quotaPercent)
+
+        searches = STANDARD_FOLDERS + ga.getLabelNames()
+        name = None
+        while 1:
+            try:
+                print "Select folder or label to list: (Ctrl-C to exit)"
+                for optionId, optionName in enumerate(searches):
+                    print "  %d. %s" % (optionId, optionName)
+                while not name:
+                    try:
+                        name = searches[int(raw_input("Choice: "))]
+                    except ValueError,info:
+                        print info
+                        name = None
+                if name in STANDARD_FOLDERS:
+                    result = ga.getMessagesByFolder(name, True)
+                else:
+                    result = ga.getMessagesByLabel(name, True)
+                    
+                if not len(result):
+                    print "No threads found in `%s`." % name
+                    break
+                name = None
+                tot = len(result)
+                
+                i = 0
+                for thread in result:
+                    print "%s messages in thread" % len(thread)
+                    print thread.id, len(thread), thread.subject
+                    for msg in thread:
+                        print "\n ", msg.id, msg.number, msg.author,msg.subject
+                        # Just as an example of other useful things
+                        #print " ", msg.cc, msg.bcc,msg.sender
+                        i += 1
+                print
+                print "number of threads:",tot
+                print "number of messages:",i
+            except KeyboardInterrupt:
+                break
+            
+    print "\n\nDone."

Added: trunk/bigboard/libgmail/mkconstants.py
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/mkconstants.py	Mon May 12 18:42:13 2008
@@ -0,0 +1,92 @@
+#!/usr/bin/env python
+#
+# mkconstants.py -- Extract constants from Gmail Javascript code
+#
+# $Revision: 1.11 $ ($Date: 2005/08/16 06:43:47 $)
+#
+# Author: follower@myrealbox.com
+#
+# License: GPL 2.0
+#
+# This tool parses the Javascript file used by Gmail, extracts
+# useful constants and then generates an importable Python module.
+#
+# 2004-07-11: Hmmm, this script is not really any use now because
+#             Gmail no longer includes the constants definitions
+#             in the Javascript...
+#
+
+import re
+import sys
+import time
+
+OUTPUT_FILENAME = "lgconstants.py"
+
+# These enumerations start at 1 rather than 0 -- I haven't looked into
+# why they are different. We want them to work correctly for Python
+# sequences so we have to fudge them and subtract one from each value.
+# NOTE: This means we can't send these values back, but that shouldn't be
+#       a problem.
+FUDGE_OFFSET_PREFIXES = ["QU", "TS", "CS", "MI", "SM", "AR"]
+
+# Used to filter out only the constants we want to use at the moment.
+USEFUL_PREFIXES = ["D", "T", "CT", "A"] + FUDGE_OFFSET_PREFIXES
+USEFUL_SUFFIXES = ["SEARCH", "START", "VIEW", "COOKIE", "THREAD", "ACTION"]
+USEFUL_NAMES = ["U_REFERENCED_MSG", "U_DRAFT_MSG"]
+RE_CONSTANTS = "var ([A-Z]{1,}_[A-Z_]+?)=(.+?);"
+
+VAR_JS_VERSION = "js_version"
+
+FMT_DEFINITION = "%s = %s\n"
+
+FILE_HEADER = """\
+#
+# Generated file -- DO NOT EDIT
+#
+# %s -- Useful constants extracted from Gmail Javascript code
+#
+# Source version: %s
+#
+# Generated: %s
+#
+
+""" % (OUTPUT_FILENAME, "%s",
+       time.strftime("%Y-%m-%d %H:%M UTC", time.gmtime()))
+
+if __name__ == "__main__":
+    lines = []
+
+    try:
+        inputFilename = sys.argv[1]
+    except IndexError:
+        print "Usage: mkconstants.py <gmail.js>"
+        raise SystemExit
+
+    print "Reading `%s`..." % inputFilename
+    code = open(inputFilename).read()
+
+    jsVersion = re.search("var %s=(.+?);" % VAR_JS_VERSION, code).group(1)
+
+    lines.extend([FMT_DEFINITION % (VAR_JS_VERSION, jsVersion), "\n"])
+
+    matches = re.findall(RE_CONSTANTS, code)
+
+    for name, value in matches:
+        prefix = name[:name.index("_")]
+        suffix = name[name.rindex("_")+1:]
+
+        if prefix in USEFUL_PREFIXES or suffix in USEFUL_SUFFIXES or \
+               name.startswith("U_AS_") or name.startswith("U_COMPOSE") or \
+               name.startswith("U_ACTION_") or \
+               name in USEFUL_NAMES:
+            if prefix in FUDGE_OFFSET_PREFIXES:
+                value = int(value) - 1
+            lines.append(FMT_DEFINITION % (name, value))
+
+    lines.insert(0, FILE_HEADER % jsVersion.strip("'"))
+
+    print "Writing `%s`..." % OUTPUT_FILENAME
+    open(OUTPUT_FILENAME, "w").writelines(lines)
+
+    print "Done."
+    
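To make the extraction step above concrete, here is a small self-contained sketch (Python 3 syntax) applying the same `RE_CONSTANTS` pattern to a hypothetical fragment of the old Gmail Javascript; as in the loop above, constants whose prefix is in `FUDGE_OFFSET_PREFIXES` get one subtracted so they index Python sequences correctly:

```python
import re

# Same pattern and fudge list as mkconstants.py; the Javascript
# snippet below is a made-up example, not real Gmail code.
RE_CONSTANTS = r"var ([A-Z]{1,}_[A-Z_]+?)=(.+?);"
FUDGE_OFFSET_PREFIXES = ["QU", "TS", "CS", "MI", "SM", "AR"]

sample_js = "var D_THREAD='t';var MI_SUBJECT=6;var QU_QUOTA=2;"

constants = {}
for name, value in re.findall(RE_CONSTANTS, sample_js):
    prefix = name[:name.index("_")]
    if prefix in FUDGE_OFFSET_PREFIXES:
        # 1-based in the Javascript, so shift to 0-based.
        value = int(value) - 1
    constants[name] = value

print(constants)
```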

Added: trunk/bigboard/libgmail/setup.py
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/setup.py	Mon May 12 18:42:13 2008
@@ -0,0 +1,19 @@
+#!/usr/bin/env python
+
+# Setup script for the libgmail package
+# Usage:
+# To create a source package: python setup.py sdist
+# To install to your system:  python setup.py install
+import libgmail
+from distutils.core import setup
+mods = ['libgmail','lgconstants']
+setup (name = "libgmail",
+       version = "%s" % libgmail.Version,
+       description = "python bindings to access Gmail",
+       author = "wdaher mit edu,stas linux isbeter nl,follower myrealbox com",
+       author_email = "libgmail-developer lists sf net",
+       url = "http://libgmail.sourceforge.net/";,
+       license = "GPL",
+       py_modules = mods,
+      )
+

Added: trunk/bigboard/libgmail/test_contacts.py
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/test_contacts.py	Mon May 12 18:42:13 2008
@@ -0,0 +1,143 @@
+#!/usr/bin/env python
+
+import unittest
+import getpass
+
+from libgmail_new import *
+from lgconstants import *
+from lgcontacts import GContacts,GmailContact,GmailContactList
+
+
+class ContactsTests(unittest.TestCase):
+    """
+    Set of tests that exercise the contacts portion of libgmail
+    """
+    def setUp(self):
+        """
+        Delete all entries in the
+        addressbook so we start fresh
+        """
+        #print "Setting up!"
+        contacts = GC.getContacts()
+        for contact in contacts.getAllContacts():
+            #print "Removing", contact
+            GC.removeContact(contact)
+
+    def tearDown(self):
+        """
+        Delete all entries in the
+        addressbook so we start fresh
+        """
+        #print "Tearing down!"
+        contacts = GC.getContacts()
+        for contact in contacts.getAllContacts():
+            #print "Removing", contact
+            GC.removeContact(contact)
+
+
+    def test1_BasicAddContact(self):
+        """Create and retrieve an entry-level contact"""
+        name = 'John Smith'
+        email = 'john.smith@gmail.com'
+        notes = 'I am average'
+        GC.addContact(name, email, notes)
+        myContacts = GC.getContacts()
+        contact = myContacts.getContactByName(name)
+        self.assertEqual(contact.getName(), name, "Returned name isn't the one we created initially")
+        self.assertEqual(contact.getEmail(), email, "Returned email isn't the one we created initially")
+        self.assertEqual(contact.getNotes(), notes, "Returned note isn't the one we created initially")
+
+    def test3_GmailContact(self):
+        """Check that GmailContact equality and accessor methods work"""
+        w = GmailContact('a','b','c','d')
+        x = GmailContact('x','y','z')
+        y = GmailContact('a','b','c','d')
+        z = GmailContact('a','b','c','d')
+
+        self.assertEqual(w,w, "%s doesn't equal %s" % (w,w))
+        self.assertEqual(x,x, "%s doesn't equal %s" % (x,x))
+        self.assertEqual(y,y, "%s doesn't equal %s" % (y,y))
+        self.assertEqual(z,z, "%s doesn't equal %s" % (z,z))
+        self.assertEqual(w,y, "%s doesn't equal %s" % (w,y))
+        self.assertEqual(y,z, "%s doesn't equal %s" % (y,z))
+        self.assertEqual(w,z, "%s doesn't equal %s" % (w,z))
+        self.assertEqual(z,w, "%s doesn't equal %s" % (z,w))
+
+        self.assertNotEqual(w,x, "%s shouldn't equal %s" % (w,x))
+        self.assertNotEqual(x,w, "%s shouldn't equal %s" % (x,w))
+        
+        i,a,e,n = w.getId(),w.getName(),w.getEmail(),w.getNotes()
+        self.assertEqual(i, 'a', "%s doesn't equal 'a'" % i)
+        self.assertEqual(a, 'b', "%s doesn't equal 'b'" % a)
+        self.assertEqual(e, 'c', "%s doesn't equal 'c'" % e)
+        self.assertEqual(n, 'd', "%s doesn't equal 'd'" % n)
+
+        self.assertEqual(x.getNotes(), '', "getNotes() should return ''")
+
+    def test4_GetBy(self):
+        """Get a contact by name, email, and id"""
+        GC.addContact('Waseem', 'wdaher@gmail.com', 'Is awesome')
+        myContacts = GC.getContacts()
+        waseem = myContacts.getContactByName('Waseem')
+        self.assertEqual(waseem, myContacts.getContactByEmail('wdaher@gmail.com'))
+        self.assertEqual(waseem, myContacts.getContactById(waseem.getId()))
+
+    def test5_GetByLists(self):
+        """Get a contact list by name, email, and id"""
+        GC.addContact('Waseem', 'wdaher@gmail.com', 'Is awesome')
+        GC.addContact('Daher', 'test foo bar')
+        myContacts = GC.getContacts()
+        waseem = myContacts.getContactByName('Waseem')
+
+        result = [waseem]
+        obj = myContacts.getContactListByName('Waseem')
+        self.assertEqual(result, obj, "%s doesn't equal %s" % (result,obj))
+        obj = myContacts.getContactListByEmail('wdaher@gmail.com')
+        self.assertEqual([waseem], obj, "%s doesn't equal %s" % (result,obj))
+        obj = myContacts.getContactListById(waseem.getId())
+        self.assertEqual([waseem], obj, "%s doesn't equal %s" % (result,obj))
+
+    def test6_SmallGetAndRemove(self):
+        """Add one address and remove it again"""
+        count = 1
+        # Add some
+        for x in range(count):
+            GC.addContact(str(x), str(x))
+        myContactList = GC.getContacts()
+        self.assertEqual(myContactList.getCount(), count)
+
+        # Now remove them all
+        for x in range(count):
+            self.assertEqual(True, GC.removeContact(myContactList.getContactByName(str(x))))
+        myContactList = GC.getContacts()
+        self.assertEqual(myContactList.getCount(), 0)
+
+
+if __name__ == '__main__':
+    
+    print "\n=============================================="
+    print "Start 'libgmail_new contacts' testsuite"
+    print "------------------------------------------------\n"
+    print "WARNING: THIS WILL DELETE/REARRANGE"
+    print "         YOUR ADDRESSBOOK/EMAILS"
+    print " BE SURE TO RUN THIS TEST ONLY ON A 'test' ACCOUNT"
+    
+    name = raw_input("Gmail account name:")
+    pw = getpass.getpass("Password: ")
+    account = GmailAccount(name, pw)
+
+    try:
+        account.login()
+        print "Login successful.\n"
+    except GmailLoginFailure,e:
+        print "\nLogin failed. (%s)" % e.message
+    else:
+        GC = GContacts(account)
+        suite = unittest.TestSuite()
+        suite.addTest(unittest.makeSuite(ContactsTests))
+        unittest.TextTestRunner(verbosity=2).run(suite)
+
+
+print "\nDone"
+
+
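The equality checks in `test3_GmailContact` above assume `GmailContact` compares by field values rather than object identity. A minimal Python 3 stand-in for that behaviour (hypothetical, not the library's actual class, which lives in `lgcontacts` and takes `(id, name, email, notes)` positionally):

```python
from dataclasses import dataclass

# Hypothetical mirror of the value-equality the tests above rely on.
@dataclass
class Contact:
    id: str
    name: str
    email: str
    notes: str = ''  # getNotes() returns '' when no notes were given

w = Contact('a', 'b', 'c', 'd')
y = Contact('a', 'b', 'c', 'd')
x = Contact('x', 'y', 'z')

# dataclass __eq__ compares field by field, so w == y but w != x.
print(w == y, w == x)
```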

Added: trunk/bigboard/libgmail/testlibgmail.py
==============================================================================
--- (empty file)
+++ trunk/bigboard/libgmail/testlibgmail.py	Mon May 12 18:42:13 2008
@@ -0,0 +1,85 @@
+#!/usr/bin/env python
+"""
+libgmail test suite
+
+Tests:
+Very little, at this point :)
+"""
+import unittest
+import time
+from libgmail import *
+import getpass
+
+class LibgmailTests(unittest.TestCase):
+    """
+    Set of tests that exercise very basic libgmail functionality
+    """
+    def setUp(self):
+        pass
+
+    def tearDown(self):
+        pass
+
+    def test_send_and_receive_mail(self):
+        if account.domain:
+            name = account.name + '@' + account.domain
+        else:
+            name = account.name + '@gmail.com'
+        subject = "libgmail test subject"
+        body = """
+        Hi, I am a unit test of libgmail. Ignore this message,
+        if you dare. Seriously, I won't be offended if you
+        ignore it. And you probably should, since right
+        now, the test suite doesn't delete this message
+        from your trash, sooo.... it'll just linger.
+
+        "You've got me wrapped around your finger /
+        did you have to let it linger?
+        did you have to?
+        did you have to?
+        did you have to let it linger?"
+
+        etc.
+        """
+        msg = GmailComposedMessage(to=name, subject=subject,
+                                   body=body)
+        output = account.sendMessage(msg)
+
+        # Now go to the inbox and attempt to retrieve
+        # this message
+        # Sleep for like, ten seconds, so that we can
+        # actually get the message
+        time.sleep(10)
+        result = account.getMessagesByFolder(U_INBOX_SEARCH)
+        # We'd better be in the first thread
+        thread = result[0]
+        first = thread[0]
+        self.assertEqual(first.subject, msg.subject)
+        self.assertEqual(first.to[0], msg.to)
+        # Now send the message to the trash
+        account.trashMessage(first)
+
+if __name__ == '__main__':
+    #unittest.main()
+    ## With this we get a better output
+    print "\n=============================================="
+    print "Start 'libgmail' testsuite"
+    print "------------------------------------------------\n"
+    print "WARNING: THIS TEST MAY DELETE/REARRANGE"
+    print "         YOUR ADDRESSBOOK/EMAILS"
+    print "PLEASE DON'T RUN IT ON A REAL ACCOUNT"
+    
+    name = raw_input("Gmail account name: ")
+    pw = getpass.getpass("Password: ")
+    domain = raw_input("Domain [leave blank for gmail]: ")
+    account = GmailAccount(name, pw, domain=domain)
+
+    try:
+        account.login()
+        print "Login successful.\n"
+    except GmailLoginFailure,e:
+        print "\nLogin failed. (%s)" % e.message
+    else:
+        suite = unittest.TestSuite()
+        suite.addTest(unittest.makeSuite(LibgmailTests))
+        unittest.TextTestRunner(verbosity=2).run(suite)


