
BLU Discuss list archive



URL Parsing Utility?



While we're doing a python lesson, I'd be interested in how to solve
a problem with this code that I've seen with a lot of python, and
which doesn't seem to be covered too well in TFM. Maybe it's just too
trivial. The problem is that, when given the URL on stdin, this
program produces two lines of output, not one. The second line is
blank. This is, of course, silly, but lack of information on exactly
how to get such trivia correct can be a significant barrier. I tend
to continue using perl, because when the input needs to be fed to
some other program that's picky about its input, I can control the
white space exactly in perl. With python, I can get the data right,
but I always seem to get silly extra white space like this, and I
don't have a good enough handle on python's char handling to
understand where it's coming from or how to Get It Right.

| #!/usr/bin/env python
| import sys, urllib
|
| if len(sys.argv) > 1:
|    ick = sys.argv[1]
| else:
|    ick = sys.stdin.readline()
|
| try:
|    print urllib.unquote(ick)
| except:
|    print "could not be unquoted."


jc at trillian.mit.edu, the John Chambers who long ago learned that anal
retentiveness is a required characteristic of a good programmer.

(And I've long argued that the most significant technical advance  in
perl5 was the chomp function.  ;-)





BLU is a member of BostonUserGroups
We also thank MIT for the use of their facilities.




Boston Linux & Unix / webmaster@blu.org