#1
  1. No Profile Picture
    Contributing User
    Devshed Newbie (0 - 499 posts)

    Join Date
    Feb 2003
    Posts
    154
    Rep Power
    14

    Problem with Brill Tagger - Using NLTK library


    Since the library I'm encountering difficulties with has only a small user community, I feel I need to open this problem up to the wider community in the hope that someone is able to offer some advice.

    Basically, I'm using a library called nltk to perform some Natural Language Processing on some text files. Before I give details about the specifics of the problem, anyone who wishes to help out will need to do the following steps first:

    1. Go to: http://nltk.sourceforge.net/install.html and install the appropriate version of the library.

    2. Check out: http://nltk.sourceforge.net/api-1.4/index.html for documentation on all classes within this library.

    I will now go on to explain the problem before supplying the code, which you can copy into a Python file and then just run to see the error message for yourself!!!

    The problem I'm having difficulty with is making use of the Brill Tagger supplied with the nltk library. I seem to have run into trouble invoking the tag method of the 'BrillTagger' class.

    I've managed to train the Brill tagger on the 'treebank' corpus, but when I come to invoke the 'tag' method, I receive the error 'KeyError: SUBTOKENS'. I can't seem to find the reason for it throwing up this error, even though I understand what the error is referring to. The error is basically indicating that the method 'tag(self, token)' requires a token instance in order to assign POS tags, and for some reason it's not liking the variable I'm passing!!
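For anyone unfamiliar with this class of error, the mechanics can be reproduced with a plain Python dictionary standing in for the nltk Token (which is dictionary-like); this is an illustration only, not nltk code:

```python
# Plain dict standing in for an nltk Token (which behaves like a
# dictionary). Looking up a key that was never stored raises KeyError,
# which is exactly what tag() hits when the token has no 'SUBTOKENS'
# entry.
token = {'TEXT': 'some text to be tagged', 'WORDS': ['some', 'text']}

def lookup(tok, key):
    # Return the stored value, or a message naming the missing key.
    try:
        return tok[key]
    except KeyError:
        return 'missing key: %s' % key

print(lookup(token, 'WORDS'))       # key written by the tokenizer
print(lookup(token, 'SUBTOKENS'))   # missing key: SUBTOKENS
```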

    Below is the code I'm using which you can just copy and run directly to see the error message I receive.


    import re

    import sys

    sys.path.append('/home/csunix/extras/nltk/1.4.2/lib/python2.3/site-packages')
    sys.path.append('/home/csunix/extras/nltk/1.4.2/lib/python2.3/site-packages/Numeric')

    import nltk

    from nltk.tokenizer import *

    from nltk.corpus import SimpleCorpusReader

    from nltk.probability import FreqDist

    from nltk.parser import ParserI

    from nltk.stemmer.porter import *

    from nltk.tagger import *

    from nltk.tagger.brill import *

    from nltk.corpus import words as w, brown, treebank


    corpusTStext = "some text to be assigned part of speech tags. I am using a corpus but for this example might as well just use a small string of text"


    # Tokenize string to extract words

    corpusTStoken = Token(TEXT=corpusTStext)

    wstokenizer = WhitespaceTokenizer(SUBTOKENS='WORDS').tokenize(corpusTStoken)



    # Tokenize string to extract bi-grams

    # Create bi-grams constructed from current word and word adjacent to the left

    corpusTSnglhstoken = Token(TEXT=corpusTStext)

    pat = '\w+\s+\w+'

    RegexpTokenizer(pat, negative=False, SUBTOKENS='WORDS').tokenize(corpusTSnglhstoken)


    train_tokens = []

    items = treebank.items('tagged')
    for item in items[:100]:
        item = treebank.read(item)
        for sent in item['SENTS']:
            train_tokens += sent['WORDS']
    train_tokens = [train_tokens[i] for i in range(len(train_tokens))
                    if train_tokens[i]['TEXT'][0] not in "[]="]

    #train_tokens.append(w.read('en_GB.dic'))

    trainCutoff = int(len(train_tokens)*0.8)
    train_tokens = Token(SUBTOKENS=train_tokens[0:trainCutoff])

    # Train a Unigram Tagger

    postagger = UnigramTagger(TAG='POS')
    postagger.train(train_tokens)


    # Train Brill Tagger

    templates = [
        SymmetricProximateTokensTemplate(ProximateTagsRule, (1,1)),
        SymmetricProximateTokensTemplate(ProximateTagsRule, (2,2)),
        SymmetricProximateTokensTemplate(ProximateTagsRule, (1,2)),
        SymmetricProximateTokensTemplate(ProximateTagsRule, (1,3)),
        SymmetricProximateTokensTemplate(ProximateWordsRule, (1,1)),
        SymmetricProximateTokensTemplate(ProximateWordsRule, (2,2)),
        SymmetricProximateTokensTemplate(ProximateWordsRule, (1,2)),
        SymmetricProximateTokensTemplate(ProximateWordsRule, (1,3)),
        ProximateTokensTemplate(ProximateTagsRule, (-1, -1), (1,1)),
        ProximateTokensTemplate(ProximateWordsRule, (-1, -1), (1,1))
    ]

    trace = 3

    brilltrainer = BrillTaggerTrainer(postagger, templates, trace, TAG='POS')

    brillrules = brilltrainer.train(train_tokens, max_rules=50, min_score=2)

    brillrules = brillrules.rules


    # (POS) Tag corpus training set (corpusTS)

    brilltagger = BrillTagger(postagger, brillrules)

    brilltagger.tag(corpusTStoken)


    tagwords = open("taggedwords.txt","w")

    for token in corpusTStoken['WORDS']:
        tagwords.write(token['TEXT'] + "/" + str(token['TAG']) + "\n")
    tagwords.close()




    Will be very appreciative of any advice anyone is able to offer.

    Thanks in advance,

    Mark
  2. #2
  3. Mini me.
    Devshed Novice (500 - 999 posts)

    Join Date
    Nov 2003
    Location
    Cambridge, UK
    Posts
    783
    Rep Power
    13
    I won't try your code out until it's posted in a format we can use
    It would also help to post the traceback.

    I did read the documentation of the tag method for you - it suggests that the Token you pass must be a dictionary like object with a "SUBTOKENS" key.
    tag(self, token)
    Assign a tag to each subtoken in token['SUBTOKENS'], and write those tags to the subtokens' tag properties.
    So does
    Code:
    corpusTStoken = Token(TEXT=corpusTStext)
    always return a dictionary with a "SUBTOKENS" key?
    Have you tried printing corpusTStoken to see what's there?

    grim
  4. #3
  5. No Profile Picture
    Contributing User
    Devshed Newbie (0 - 499 posts)

    Join Date
    Feb 2003
    Posts
    154
    Rep Power
    14
    I've encapsulated the code in a code tag like I believe you were requesting:

    Code:
    import re
    
    import sys
    
    sys.path.append('/home/csunix/extras/nltk/1.4.2/lib/python2.3/site-packages')
    sys.path.append('/home/csunix/extras/nltk/1.4.2/lib/python2.3/site-packages/Numeric')
    
    import nltk
    
    from nltk.tokenizer import *
    
    from nltk.corpus import SimpleCorpusReader
    
    from nltk.probability import FreqDist
    
    from nltk.parser import ParserI
    
    from nltk.stemmer.porter import *
    
    from nltk.tagger import *
    
    from nltk.tagger.brill import *
    
    from nltk.corpus import words as w, brown, treebank
    
    
    corpusTStext = "some text to be assigned part of speech tags. I am using a corpus but for this example might as well just use a small string of text"
    
    
    # Tokenize string to extract words
    
    corpusTStoken = Token(TEXT=corpusTStext)
    
    wstokenizer = WhitespaceTokenizer(SUBTOKENS='WORDS').tokenize(corpusTStoken)
    
    
    
    # Tokenize string to extract bi-grams
    
    # Create bi-grams constructed from current word and word adjacent to the left
    
    corpusTSnglhstoken = Token(TEXT=corpusTStext)
    
    pat = '\w+\s+\w+'
    
    RegexpTokenizer(pat, negative=False, SUBTOKENS='WORDS').tokenize(corpusTSnglhstoken)
    
    
    train_tokens = []
    
    items = treebank.items('tagged')
    for item in items[:100]:
        item = treebank.read(item)
        for sent in item['SENTS']:
            train_tokens += sent['WORDS']
    train_tokens = [train_tokens[i] for i in range(len(train_tokens))
                    if train_tokens[i]['TEXT'][0] not in "[]="]
    
    #train_tokens.append(w.read('en_GB.dic'))
    
    trainCutoff = int(len(train_tokens)*0.8)
    train_tokens = Token(SUBTOKENS=train_tokens[0:trainCutoff])
    
    # Train a Unigram Tagger
    
    postagger = UnigramTagger(TAG='POS')
    postagger.train(train_tokens)
    
    
    # Train Brill Tagger
    
    templates = [
        SymmetricProximateTokensTemplate(ProximateTagsRule, (1,1)),
        SymmetricProximateTokensTemplate(ProximateTagsRule, (2,2)),
        SymmetricProximateTokensTemplate(ProximateTagsRule, (1,2)),
        SymmetricProximateTokensTemplate(ProximateTagsRule, (1,3)),
        SymmetricProximateTokensTemplate(ProximateWordsRule, (1,1)),
        SymmetricProximateTokensTemplate(ProximateWordsRule, (2,2)),
        SymmetricProximateTokensTemplate(ProximateWordsRule, (1,2)),
        SymmetricProximateTokensTemplate(ProximateWordsRule, (1,3)),
        ProximateTokensTemplate(ProximateTagsRule, (-1, -1), (1,1)),
        ProximateTokensTemplate(ProximateWordsRule, (-1, -1), (1,1))
    ]
    
    trace = 3
    
    brilltrainer = BrillTaggerTrainer(postagger, templates, trace, TAG='POS')
    
    brillrules = brilltrainer.train(train_tokens, max_rules=50, min_score=2)
    
    brillrules = brillrules.rules
    
    
    # (POS) Tag corpus training set (corpusTS)
    
    brilltagger = BrillTagger(postagger, brillrules)
    
    brilltagger.tag(corpusTStoken)
    
    
    tagwords = open("taggedwords.txt","w")
    
    for token in corpusTStoken['WORDS']:
        tagwords.write(token['TEXT'] + "/" + str(token['TAG']) + "\n")
    tagwords.close()
    As regards the contents/output of corpusTStoken, it produces a set of tokens all encapsulated within a token, i.e. a token with many subtokens, as below:

    <[<some>, <text>, <to>, <be>, <assigned>, ... , <text>]>

    The outer angle brackets represent the token, with each inner pair of angle brackets being a subtoken. Each subtoken at this point consists of one attribute, 'TEXT'. The idea of the above supplied code is to add another attribute, 'POS' (part-of-speech), to each subtoken!!!
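To illustrate the intended structure with plain dicts (an illustration only, not the real nltk classes; the tag values are dummies purely to show the target shape):

```python
# Plain-dict model of the structure described above: one outer token
# whose 'WORDS' subtokens each carry a single 'TEXT' attribute. The
# goal of the script is for the tagger to add a 'POS' attribute to
# every subtoken.
corpusTStext = 'some text to be assigned'
corpus = {'TEXT': corpusTStext,
          'WORDS': [{'TEXT': w} for w in corpusTStext.split()]}

# Conceptually, tagging annotates each subtoken in place. 'DUMMY' is
# a placeholder, not a real part-of-speech tag.
for sub in corpus['WORDS']:
    sub['POS'] = 'DUMMY'

print([(s['TEXT'], s['POS']) for s in corpus['WORDS']])
```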

    The traceback for the error message I'm receiving is:

    Traceback (most recent call last):
      File "./brilltag.py", line 99, in ?
        brilltagger.tag(corpusTStoken)
      File "/home/.../nltk/tagger/brill.py", line 74, in tag
        self._initial_tagger.tag(token)
      File "/home/.../nltk/tagger/__init__.py", line 221, in tag
        subtokens = token[SUBTOKENS]
    KeyError: 'SUBTOKENS'
  6. #4
  7. Mini me.
    Devshed Novice (500 - 999 posts)

    Join Date
    Nov 2003
    Location
    Cambridge, UK
    Posts
    783
    Rep Power
    13
    Mark, you almost got it
    You need to paste your code with the indentation intact, otherwise the code is meaningless and very difficult to read or to run on another machine for testing.

    Thanks for the Traceback - just seen it

    Everything you have posted so far suggests that the tag method expects one thing and you give it another.

    Does the tag method expect a Python dictionary - that is what your error report suggests - so is corpusTStoken a Python dictionary all the time and does it contain the SUBTOKENS key ????

    If the data being passed to the tag method is correctly structured but you still get the error, then you should examine the elements of the structure to see what might be causing the issue.

    Have you tried manually constructing a token structure and passing it to tag? What are your results?

    grimey
  8. #5
  9. No Profile Picture
    Contributing User
    Devshed Newbie (0 - 499 posts)

    Join Date
    Feb 2003
    Posts
    154
    Rep Power
    14
    Code:
    import re
    
    import sys
    
    sys.path.append('/home/csunix/extras/nltk/1.4.2/lib/python2.3/site-packages')
    sys.path.append('/home/csunix/extras/nltk/1.4.2/lib/python2.3/site-packages/Numeric')
    
    import nltk
    
    from nltk.tokenizer import *
    
    from nltk.corpus import SimpleCorpusReader
    
    from nltk.probability import FreqDist
    
    from nltk.parser import ParserI
    
    from nltk.stemmer.porter import *
    
    from nltk.tagger import *
    
    from nltk.tagger.brill import *
    
    from nltk.corpus import words as w, brown, treebank
    
      
    corpusTStext = "some text to be assigned part of speech tags. I am using a corpus but for this example might as well just use a small string of text" 
    
    
    # Tokenize string to extract words
    
    corpusTStoken = Token(TEXT=corpusTStext)
    
    wstokenizer = WhitespaceTokenizer(SUBTOKENS='WORDS').tokenize(corpusTStoken)
    
    
    
    # Tokenize string to extract bi-grams
    
    # Create bi-grams constructed from current word and word adjacent to the left 
    
    corpusTSnglhstoken = Token(TEXT=corpusTStext) 
    
    pat = '\w+\s+\w+'
    
    RegexpTokenizer(pat, negative=False, SUBTOKENS='WORDS').tokenize(corpusTSnglhstoken)
      
    
    train_tokens = []
    
    items = treebank.items('tagged')
    for item in items[:100]:
        item = treebank.read(item)
        for sent in item['SENTS']:
            train_tokens += sent['WORDS']
    train_tokens = [train_tokens[i] for i in range(len(train_tokens))
                    if train_tokens[i]['TEXT'][0] not in "[]="]
    
    #train_tokens.append(w.read('en_GB.dic'))
    
    trainCutoff = int(len(train_tokens)*0.8)
    train_tokens = Token(SUBTOKENS=train_tokens[0:trainCutoff])
    
    # Train a Unigram Tagger
    postagger = UnigramTagger(TAG='POS')
    postagger.train(train_tokens)
    
    # Train Brill Tagger
    templates = [
        SymmetricProximateTokensTemplate(ProximateTagsRule, (1,1)),
        SymmetricProximateTokensTemplate(ProximateTagsRule, (2,2)),
        SymmetricProximateTokensTemplate(ProximateTagsRule, (1,2)),
        SymmetricProximateTokensTemplate(ProximateTagsRule, (1,3)),
        SymmetricProximateTokensTemplate(ProximateWordsRule, (1,1)),
        SymmetricProximateTokensTemplate(ProximateWordsRule, (2,2)),
        SymmetricProximateTokensTemplate(ProximateWordsRule, (1,2)),
        SymmetricProximateTokensTemplate(ProximateWordsRule, (1,3)),
        ProximateTokensTemplate(ProximateTagsRule, (-1, -1), (1,1)),
        ProximateTokensTemplate(ProximateWordsRule, (-1, -1), (1,1))
    ]
    
    trace = 3
    brilltrainer = BrillTaggerTrainer(postagger, templates, trace, TAG='POS')
    brillrules = brilltrainer.train(train_tokens, max_rules=50, min_score=2)
    brillrules = brillrules.rules
    
    # (POS) Tag corpus training set (corpusTS)
    brilltagger = BrillTagger(postagger, brillrules)
    brilltagger.tag(corpusTStoken)
    
    tagwords = open("taggedwords.txt","w")
    for token in corpusTStoken['WORDS']:
        tagwords.write(token['TEXT'] + "/" + str(token['TAG']) + "\n")
    tagwords.close()
    Hope you can try running it for yourselves now!!! I'm just going to run a few more tests on the corpusTStoken to see if I can gain a better understanding in relation to your question about the structure/format of this variable.

    Will be obliged for any advice anyone is able to suggest from running the above code.

    Thanks in advance!!!
  10. #6
  11. No Profile Picture
    Contributing User
    Devshed Newbie (0 - 499 posts)

    Join Date
    Feb 2003
    Posts
    154
    Rep Power
    14
    Hi Paul,

    Although I can't manually construct a token in its raw form, i.e. <'value'>, I've assigned corpusTStoken to Token(TEXT='this') and also, separately, Token(Token(TEXT='this')), but receive the same error, i.e. KeyError: 'SUBTOKENS'.

    From my limited understanding, the documentation states that the tag method requires a token which itself contains a set of tokens (sub-tokens of the token) to be passed as its argument. As far as I'm aware, that's exactly what I'm passing, but the error persists in spite of this!!!

    Have you managed to execute the code I've given yet Paul???

    Mark
  12. #7
  13. Mini me.
    Devshed Novice (500 - 999 posts)

    Join Date
    Nov 2003
    Location
    Cambridge, UK
    Posts
    783
    Rep Power
    13

    The 52 Mb data file will take a while to digest!

    I have now run your sample.

    Well - I didn't really want to learn nltk but it looks just as I said

    BrillTagger.tag() is expecting an object with dictionary-like properties, and one of those properties is "SUBTOKENS".
    Code:
    >>> corpusTStoken
    <[<some>, <text>, <to>, <be>, <assigned>, <part>, <of>, <speech>, <tags.>, <I>, <am>, <using>, <a>, <corpus>, <but>, <for>, <this>, <example>, <might>, <as>, <well>, <just>, <use>, <a>, <small>, <string>, <of>, <text>]>
    >>> dir(corpusTStoken)
    ['USE_SAFE_TOKENS', '_Token__repr_cyclecheck', '__class__', '__cmp__', '__contains__', '__delattr__', '__delitem__', '__doc__', '__eq__', '__ge__', '__getattribute__', '__getitem__', '__gt__', '__hash__', '__init__', '__iter__', '__le__', '__len__', '__lt__', '__module__', '__ne__', '__new__', '__nonzero__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__setitem__', '__slots__', '__str__', '_deep_restrict', '_deep_restrict_iter', '_default_repr', '_exclude', '_freeze', '_freezeval', '_project', '_repr_registry', 'clear', 'copy', 'exclude', 'freeze', 'fromkeys', 'frozen_token_class', 'get', 'has', 'has_key', 'items', 'iteritems', 'iterkeys', 'itervalues', 'keys', 'pop', 'popitem', 'project', 'properties', 'register_repr', 'setdefault', 'update', 'values']
    >>> corpusTStoken.keys()
    ['TEXT', 'WORDS']
    >>> corpusTStoken['TEXT']
    'some text to be assigned part of speech tags. I am using a corpus but for this example might as well just use a small string of text'
    >>> corpusTStoken['WORDS']
    [<some>, <text>, <to>, <be>, <assigned>, <part>, <of>, <speech>, <tags.>, <I>, <am>, <using>, <a>, <corpus>, <but>, <for>, <this>, <example>, <might>, <as>, <well>, <just>, <use>, <a>, <small>, <string>, <of>, <text>]
    >>> corpusTStoken.properties()
    ['TEXT', 'WORDS']
    As you can see, corpusTStoken does not contain a "SUBTOKENS" key.

    So you need to do whatever is required to give corpusTStoken a "SUBTOKENS" key.
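A sketch of one direction to try (plain dict standing in for the Token; whether the real nltk class accepts the same assignment would need testing):

```python
# Sketch only, with a plain dict standing in for the nltk Token. The
# word list already exists under 'WORDS', so one candidate fix is to
# expose the same list under the 'SUBTOKENS' key the tagger looks up.
token = {'TEXT': 'some text',
         'WORDS': [{'TEXT': 'some'}, {'TEXT': 'text'}]}
token['SUBTOKENS'] = token['WORDS']   # alias the list, not a copy

print(sorted(token.keys()))           # ['SUBTOKENS', 'TEXT', 'WORDS']
```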

    Good luck!

    grimey
  14. #8
  15. No Profile Picture
    Contributing User
    Devshed Newbie (0 - 499 posts)

    Join Date
    Feb 2003
    Posts
    154
    Rep Power
    14
    Just a quick discovery:

    if I do the following:

    >> corpusTStext = "some text to be assigned part of speech tags"

    >> corpusTStoken = Token(TEXT=corpusTStext)

    >> wstokenizer = WhitespaceTokenizer(SUBTOKENS='WORDS')

    >> wstokenizer.tokenize(corpusTStoken)

    >> print wstokenizer.property_names()
    {'SUBTOKENS' : 'WORDS'}

    So it seems the Whitespace tokenizer contains this property, though it is not passed on to the token variable (corpusTStoken), which only contains:

    >> corpusTStoken.properties()
    ['TEXT', 'WORDS']

    Going to investigate more on the tokenizers. If you do get a chance to look at this some more, I would suggest looking at the documentation on Tokenizers and also Token.

    Once again, thanks for all your assistance. Appreciate whatever further advice you are able to offer.

    Mark
  16. #9
  17. No Profile Picture
    Contributing User
    Devshed Newbie (0 - 499 posts)

    Join Date
    Feb 2003
    Posts
    154
    Rep Power
    14
    This is the latest version of the code Paul:

    Code:
    import re
    
    import sys
    
    sys.path.append('/home/csunix/extras/nltk/1.4.2/lib/python2.3/site-packages')
    sys.path.append('/home/csunix/extras/nltk/1.4.2/lib/python2.3/site-packages/Numeric')
    
    import nltk
    
    from nltk.tokenizer import *
    
    from nltk.corpus import SimpleCorpusReader
    
    from nltk.probability import FreqDist
    
    from nltk.parser import ParserI
    
    from nltk.stemmer.porter import *
    
    from nltk.tagger import *
    
    from nltk.tagger.brill import *
    
    from nltk.corpus import words as w, brown, treebank
    
      
    corpusTStext = "some text to be assigned part of speech tags. I am using a corpus but for this example might as well just use a small string of text" 
    
    
    # Tokenize string to extract words
    
    corpusTStoken = Token(TEXT=corpusTStext)
    
    wstokenizer = WhitespaceTokenizer()
    
    wstokenizer.tokenize(corpusTStoken)
    
    
    # Tokenize string to extract bi-grams
    
    # Create bi-grams constructed from current word and word adjacent to the left 
    
    corpusTSnglhstoken = Token(TEXT=corpusTStext) 
    
    pat = '\w+\s+\w+'
    
    RegexpTokenizer(pat, negative=False, SUBTOKENS='WORDS').tokenize(corpusTSnglhstoken)
      
    
    tagged_tokens = []
    
    items = treebank.items('tagged')
    for item in items[:100]:
        item = treebank.read(item)
        for sent in item['SENTS']:
            tagged_tokens += sent['WORDS']
    tagged_tokens = [tagged_tokens[i] for i in range(len(tagged_tokens))
                     if tagged_tokens[i]['TEXT'][0] not in "[]="]
    
    #train_tokens.append(w.read('en_GB.dic'))
    
    trainCutoff = int(len(tagged_tokens)*0.8)
    train_tokens = Token(SUBTOKENS=tagged_tokens[0:trainCutoff])
    
    # Train a Unigram Tagger
    postagger = UnigramTagger(TAG='POS')
    postagger.train(train_tokens)
    
    # Train Brill Tagger
    templates = [
        SymmetricProximateTokensTemplate(ProximateTagsRule, (1,1)),
        SymmetricProximateTokensTemplate(ProximateTagsRule, (2,2)),
        SymmetricProximateTokensTemplate(ProximateTagsRule, (1,2)),
        SymmetricProximateTokensTemplate(ProximateTagsRule, (1,3)),
        SymmetricProximateTokensTemplate(ProximateWordsRule, (1,1)),
        SymmetricProximateTokensTemplate(ProximateWordsRule, (2,2)),
        SymmetricProximateTokensTemplate(ProximateWordsRule, (1,2)),
        SymmetricProximateTokensTemplate(ProximateWordsRule, (1,3)),
        ProximateTokensTemplate(ProximateTagsRule, (-1, -1), (1,1)),
        ProximateTokensTemplate(ProximateWordsRule, (-1, -1), (1,1))
    ]
    
    trace = 3
    brilltrainer = BrillTaggerTrainer(postagger, templates, trace, TAG='POS')
    brillrules = brilltrainer.train(train_tokens, max_rules=50, min_score=2)
    brillrules = brillrules.rules
    
    # (POS) Tag corpus training set (corpusTS)
    brilltagger = BrillTagger(postagger, brillrules)
    brilltagger.tag(corpusTStoken)
    
    tagwords = open("taggedwords.txt","w")
    for token in corpusTStoken['WORDS']:
        tagwords.write(token['TEXT'] + "/" + str(token['TAG']) + "\n")
    tagwords.close()
    By removing the property argument SUBTOKENS='WORDS' from the constructor for the WhitespaceTokenizer, the tokenize method did proceed to create a 'SUBTOKENS' property on corpusTStoken. I call this progress!!!!
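The behaviour is consistent with the output property name being a constructor keyword that defaults to 'SUBTOKENS'. A hypothetical sketch (illustrative names, not the real nltk internals):

```python
# Hypothetical sketch of the behaviour observed above; this is NOT the
# real nltk source. If the output key defaults to 'SUBTOKENS', then
# passing SUBTOKENS='WORDS' redirects the result to a 'WORDS' key.
class SketchTokenizer:
    def __init__(self, SUBTOKENS='SUBTOKENS'):
        self.out_key = SUBTOKENS
    def tokenize(self, token):
        # Split the text and write the subtokens under the chosen key.
        token[self.out_key] = [{'TEXT': w} for w in token['TEXT'].split()]

t1 = {'TEXT': 'a b'}
SketchTokenizer().tokenize(t1)                    # default key
t2 = {'TEXT': 'a b'}
SketchTokenizer(SUBTOKENS='WORDS').tokenize(t2)   # redirected key

print(sorted(t1.keys()))   # ['SUBTOKENS', 'TEXT']
print(sorted(t2.keys()))   # ['TEXT', 'WORDS']
```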

    Anyhow, I'm now receiving another error which I'm not entirely sure what it is about. The traceback which you'll get if you invoke the above code is:

    Traceback (most recent call last):
      File "./brilltag.py", line 101, in ?
        brilltagger.tag(corpusTStoken)
      File "/home/.../nltk/tagger/brill.py", line 81, in tag
        if subtoken[TAG] not in tag_to_positions:
    KeyError: 'TAG'

    Would appreciate any advice you're able to offer about interpreting this error!!!

    Thanks for all your help,

    Mark
  18. #10
  19. Mini me.
    Devshed Novice (500 - 999 posts)

    Join Date
    Nov 2003
    Location
    Cambridge, UK
    Posts
    783
    Rep Power
    13
    Well the error simply says that the key 'TAG' was expected but not found. This is much the same as the previous error. It suggests that while you have made progress, what is being passed to tag is still not yet structured as tag expects.
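One way to catch this class of problem up front is a quick structural check before handing the data to the tagger (a hedged sketch with plain dicts standing in for nltk Tokens; the function name and keys are illustrative):

```python
# Sketch: validate the token structure before calling tag(), so a
# missing key is reported up front instead of deep inside library
# code. Plain dicts stand in for nltk Tokens; 'check_token' is an
# illustrative helper, not part of nltk.
def check_token(token, subtokens_key='SUBTOKENS', required=('TEXT',)):
    if subtokens_key not in token:
        return "top-level token lacks %r" % subtokens_key
    for i, sub in enumerate(token[subtokens_key]):
        for key in required:
            if key not in sub:
                return "subtoken %d lacks %r" % (i, key)
    return "ok"

print(check_token({'SUBTOKENS': [{'TEXT': 'some'}, {'WORD': 'text'}]}))
# subtoken 1 lacks 'TEXT'
```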

    I suggest a review of the BrillTagger is required - it might help you to better understand what you need to provide. I am afraid that I don't need to know NL processing.

    grimey
  20. #11
  21. No Profile Picture
    Contributing User
    Devshed Newbie (0 - 499 posts)

    Join Date
    Feb 2003
    Posts
    154
    Rep Power
    14
    Hi Grim,

    yes, once again my failure to spot a minor mistake, i.e. mapping 'TAG' to 'POS', has scuppered progress!!! Anyway, just adding an extra argument to the trainer, i.e. BrillTaggerTrainer(postagger, templates, trace, TAG='POS'), resolved this.

    I'm real close now, but this next error that has appeared in its place is equally confusing. I know you're no NLP expert, but I'm sure you'll have come across the following error that I'm receiving.

    The traceback is:

    Traceback (most recent call last):
      File "./brilltag.py", line 109, in ?
        brilltagger.tag(corpusTStoken)
      File "/home/.../nltk/tagger/brill.py", line 88, in tag
        for rule in self._rules:
    TypeError: iteration over non-sequence

    How would you interpret this error, and what could the possible sources/reasons be in this context??
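For what it's worth, the TypeError itself can be reproduced outside nltk (in Python 2 the message reads "iteration over non-sequence"). One common way to hit it, which may or may not apply here and is worth checking against brill.py, is storing a method object without calling it: if rules happens to be a method in this library version, then brillrules.rules without parentheses stores the method itself rather than a list.

```python
# Generic reproduction of the same TypeError, outside nltk: the 'for'
# statement fails as soon as its target is not iterable. Grabbing a
# method object without calling it is one common way to end up here.
class Rules:
    def rules(self):
        return [1, 2, 3]

r = Rules().rules   # note: no parentheses, so r is a bound method

def try_iter(obj):
    try:
        for _ in obj:
            pass
        return 'iterable'
    except TypeError:
        return 'not iterable'

print(try_iter(r))                 # not iterable
print(try_iter(Rules().rules()))   # iterable
```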

    Thanks for your continuing advice,

    Mark
  22. #12
  23. No Profile Picture
    Contributing User
    Devshed Newbie (0 - 499 posts)

    Join Date
    Feb 2003
    Posts
    154
    Rep Power
    14
    Problem solved. Got it working!!!! Thanks for all your help, Paul (Grim). You've been most helpful, and I appreciate you taking time out to help me resolve this problem.

    Mark
  24. #13
  25. Mini me.
    Devshed Novice (500 - 999 posts)

    Join Date
    Nov 2003
    Location
    Cambridge, UK
    Posts
    783
    Rep Power
    13
    As I now have several 100 Mb of data I might just look into NL processing after all - it would be a shame to waste it

    Glad to help, have fun.

    Paul
  26. #14
  27. No Profile Picture
    Registered User
    Devshed Newbie (0 - 499 posts)

    Join Date
    May 2005
    Posts
    2
    Rep Power
    0
    Mark, could you give a little clarification about how you went from the TypeError to getting it working?
