I'm trying to create a script that will check the computer's host name, search a master list for that value, and return the corresponding value from the CSV file, then open another file and do a find-and-replace. I know this should be easy, but I haven't done much in Python before. Here is what I have so far...
masterlist.txt (tab delimited)
Name UID
Bob-Smith.local bobs
Carmen-Jackson.local carmenj
David-Kathman.local davidk
Jenn-Roberts.local jennr
Here is the script that I have created thus far
#GET CLIENT HOST NAME
import socket
host = socket.gethostname()
print host
#IMPORT MASTER DATA
import csv, sys
filename = "masterlist.txt"
reader = csv.reader(open(filename, "rU"), delimiter='\t')
#PRINT MASTER DATA
for row in reader:
    print row
#SEARCH ON HOSTNAME AND RETURN UID
#REPLACE VALUE IN FILE WITH UID
#import fileinput
#for line in fileinput.FileInput("filetoreplace",inplace=1):
# line = line.replace("replacethistext","UID")
# print line
Right now, it's just set to print the master list. I'm not sure if the list needs to be parsed and placed into a dictionary or what. I really need to figure out how to search the first field for the hostname and then return the field in the second column.
Thanks in advance for your help, Aaron
UPDATE: I removed line 194 and the last line from masterlist.txt and then re-ran the script. The results were the following:
Traceback (most recent call last):
  File "update.py", line 3, in <module>
    for row in csv.DictReader(open(fname), delimiter='\t'):
  File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/csv.py", line 103, in next
    self.fieldnames
  File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/csv.py", line 90, in fieldnames
    self._fieldnames = self.reader.next()
_csv.Error: new-line character seen in unquoted field - do you need to open the file in universal-newline mode?
The current script being used is...
import csv
fname = "masterlist.txt"
for row in csv.DictReader(open(fname), delimiter='\t'):
    print(row)
The two occurrences of '\xD5' in line 194 and the last line have nothing to do with the problem.
The problem appears to be a bug, or a misleading error message, or incorrect/vague documentation, in the Python 2.6 csv module.
In the file, the lines are terminated by '\x0D' aka '\r' in the Classic Mac tradition. The last line is not terminated, but that has nothing to do with the problem.
The docs for csv.reader say "If csvfile is a file object, it must be opened with the ‘b’ flag on platforms where that makes a difference." It is widely known that it does make a difference on Windows. However, opening the file with 'rb' or 'r' makes no difference in this case; the same error message appears.
The docs for csv.Dialect.lineterminator say "The string used to terminate lines produced by the writer. It defaults to '\r\n'. Note: The reader is hard-coded to recognise either '\r' or '\n' as end-of-line, and ignores lineterminator. This behavior may change in the future." It appears to be recognising '\r' as new-line but not as end-of-line/end-of-field.
The error message "_csv.Error: new-line character seen in unquoted field - do you need to open the file in universal-newline mode?" is confusing: the reader has recognised '\r' as a new-line, but it is not treating that new-line as an end-of-line (and thus implicitly an end-of-field).
It appears necessary to open the file in 'rU' mode to get it to "work". It's not apparent why the same '\r' recognised in universal-newline mode is any better.
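For reference, a minimal sketch of the workaround is just a change to the open mode of the failing script above (assuming the file really is '\r'-terminated as described):

import csv

fname = "masterlist.txt"
# 'rU' (universal-newline mode) translates the classic-Mac '\r'
# terminators, so the csv reader sees one record per line.
for row in csv.DictReader(open(fname, "rU"), delimiter='\t'):
    print(row)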
To iterate over a reader you'd do:
>>> import csv
>>> for row in csv.DictReader(open(fname), delimiter='\t'):
...     print(row)
...
{'Name': 'Bob-Smith.local', 'UID': 'bobs'}
{'Name': 'Carmen-Jackson.local', 'UID': 'carmenj'}
{'Name': 'David-Kathman.local', 'UID': 'davidk'}
{'Name': 'Jenn-Roberts.local', 'UID': 'jennr'}
But since you want to associate Name with UID:
>>> reader = csv.reader(open("masterlist.txt"), delimiter='\t')
>>> _ = next(reader) # just discarding header
>>> d = dict(reader)
>>> d['Carmen-Jackson.local']
'carmenj'
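Putting the pieces together for the original task, here's a sketch (the names "filetoreplace" and "replacethistext" are placeholders carried over from the question, and the target file is assumed to use ordinary newlines):

import csv
import fileinput
import socket
import sys

host = socket.gethostname()

# Build a hostname -> UID mapping from the tab-delimited master list;
# 'rU' copes with the '\r'-terminated lines discussed above.
reader = csv.reader(open("masterlist.txt", "rU"), delimiter='\t')
next(reader)                # discard the "Name UID" header row
uid_by_host = dict(reader)  # each remaining row is a [Name, UID] pair

uid = uid_by_host[host]     # KeyError here means the host isn't in the list

# Rewrite the target file in place, substituting the UID.
for line in fileinput.FileInput("filetoreplace", inplace=1):
    sys.stdout.write(line.replace("replacethistext", uid))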