@lianjizhe
2018-07-13T14:17:30.000000Z
tfidf
Welcome to visit my blog and my Jianshu (简书) page.
All content on this blog is shared for learning and research. If you want to repost, please contact me first, credit the author and source, and keep it non-commercial. Thanks!
This article walks through three different ways to compute TF-IDF values:

- with the gensim library
- with the sklearn library
- by hand, in plain Python

I won't spend much time on the theory behind TF-IDF; see Ruan Yifeng's (阮一峰) post "TF-IDF原理" instead. Of all the explanations I have read, his is by far the clearest.
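For quick reference, the basic definitions are below; the three implementations that follow differ in smoothing and normalization details, which is why their numbers disagree:

$$
\mathrm{tfidf}(w, d) = \mathrm{tf}(w, d) \cdot \mathrm{idf}(w), \qquad
\mathrm{tf}(w, d) = \frac{n_{w,d}}{\sum_{w'} n_{w',d}}, \qquad
\mathrm{idf}(w) = \log\frac{N}{\mathrm{df}(w)}
$$

where $n_{w,d}$ is the count of word $w$ in document $d$, $N$ is the total number of documents, and $\mathrm{df}(w)$ is the number of documents containing $w$.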
First, the gensim approach. Here is our corpus:

```python
corpus = ['this is the first document',
          'this is the second second document',
          'and the third one',
          'is this the first document']
```
Now for the processing steps.

1) Tokenize the corpus

Input:

```python
word_list = []
for i in range(len(corpus)):
    word_list.append(corpus[i].split(' '))
print(word_list)
```

Output:

```
[['this', 'is', 'the', 'first', 'document'],
 ['this', 'is', 'the', 'second', 'second', 'document'],
 ['and', 'the', 'third', 'one'],
 ['is', 'this', 'the', 'first', 'document']]
```
2) Get each word's id and term frequency

Input:

```python
from gensim import corpora

# assign every distinct word in the corpus an integer id
dictionary = corpora.Dictionary(word_list)
new_corpus = [dictionary.doc2bow(text) for text in word_list]
# in each tuple, the first element is the word's id in the dictionary
# and the second is the word's count in that document
print(new_corpus)
```

Output:

```
[[(0, 1), (1, 1), (2, 1), (3, 1), (4, 1)],
 [(0, 1), (2, 1), (3, 1), (4, 1), (5, 2)],
 [(3, 1), (6, 1), (7, 1), (8, 1)],
 [(0, 1), (1, 1), (2, 1), (3, 1), (4, 1)]]
```

Input:

```python
# this shows the id assigned to each word in the corpus
print(dictionary.token2id)
```

Output:

```
{'document': 0, 'first': 1, 'is': 2, 'the': 3, 'this': 4, 'second': 5, 'and': 6, 'one': 7, 'third': 8}
```
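Under the hood, `doc2bow` just maps tokens to their dictionary ids and counts them, dropping anything not in the dictionary. A minimal pure-Python sketch of that behavior, using a hand-written `token2id` that mirrors part of the mapping gensim built above:

```python
from collections import Counter

# hypothetical id mapping, mirroring the dictionary gensim built above
token2id = {'document': 0, 'first': 1, 'is': 2, 'the': 3, 'this': 4}

def doc2bow_sketch(tokens, token2id):
    """Count known tokens and return sorted (id, count) pairs; unknown tokens are dropped."""
    counts = Counter(token2id[t] for t in tokens if t in token2id)
    return sorted(counts.items())

print(doc2bow_sketch(['this', 'is', 'the', 'first', 'document'], token2id))
# → [(0, 1), (1, 1), (2, 1), (3, 1), (4, 1)]
```

Note that out-of-vocabulary tokens simply vanish, which matters in the small test further down.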
3) Train the gensim model and save it for later use

Input:

```python
from gensim import models

# train the model and save it
tfidf = models.TfidfModel(new_corpus)
tfidf.save("my_model.tfidf")

# load the saved model
tfidf = models.TfidfModel.load("my_model.tfidf")

# use the trained model to get tf-idf values for each document
tfidf_vec = []
for i in range(len(corpus)):
    string = corpus[i]
    string_bow = dictionary.doc2bow(string.lower().split())
    string_tfidf = tfidf[string_bow]
    tfidf_vec.append(string_tfidf)
print(tfidf_vec)
```

Output:

```
[[(0, 0.33699829595119235),
  (1, 0.8119707171924228),
  (2, 0.33699829595119235),
  (4, 0.33699829595119235)],
 [(0, 0.10212329019650272),
  (2, 0.10212329019650272),
  (4, 0.10212329019650272),
  (5, 0.9842319344536239)],
 [(6, 0.5773502691896258), (7, 0.5773502691896258), (8, 0.5773502691896258)],
 [(0, 0.33699829595119235),
  (1, 0.8119707171924228),
  (2, 0.33699829595119235),
  (4, 0.33699829595119235)]]
```
Notice that the number of entries in these vectors does not match the number of distinct words in our corpus, yet what we wanted was a tf-idf value for every word. To get to the bottom of this, let's run a small test.

4) A small test to expose what gensim is really doing

Input:

```python
# try a few arbitrary words
string = 'the i first second name'
string_bow = dictionary.doc2bow(string.lower().split())
string_tfidf = tfidf[string_bow]
print(string_tfidf)
```

Output:

```
[(1, 0.4472135954999579), (5, 0.8944271909999159)]
```
- In gensim's output, the first element of each pair is the word's id and the second is its tf-idf value.
- Words that occur in every document, such as 'the', get an IDF of 0, and gensim drops zero-weight terms from its output. This looks like automatic stopword removal, but it is really just the IDF formula at work.
- Words that were never seen during training, such as 'i' and 'name' here, are not in the dictionary, so `doc2bow` silently ignores them.
- So gensim alone does not give us a tf-idf value for every word.
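The 'the' case is easy to verify by hand. Assuming gensim's default settings (a base-2 log for the global weight and L2 normalization of each document vector), a word appearing in all N documents gets IDF exactly 0, and the two surviving weights of the test sentence can be reproduced in a few lines:

```python
import math

N = 4  # documents in the corpus above; df counts how many contain each word
df = {'the': 4, 'first': 2, 'second': 1}

# gensim's default global weight: log base 2 of N/df
idf = {w: math.log2(N / d) for w, d in df.items()}
print(idf)  # → {'the': 0.0, 'first': 1.0, 'second': 2.0}

# the test sentence's surviving words each occur once (tf = 1),
# so the raw weights are just the idf values, then L2-normalized
weights = [idf['first'], idf['second']]
norm = math.sqrt(sum(w * w for w in weights))
print([round(w / norm, 6) for w in weights])  # → [0.447214, 0.894427]
```

The normalized pair matches the `[(1, 0.4472...), (5, 0.8944...)]` output above.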
Our corpus is unchanged; this time we use sklearn.

```python
corpus = ['this is the first document',
          'this is the second second document',
          'and the third one',
          'is this the first document']
```

Here is the processing.

Input:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

tfidf_vec = TfidfVectorizer()
tfidf_matrix = tfidf_vec.fit_transform(corpus)

# all distinct words in the corpus
print(tfidf_vec.get_feature_names())

# the id assigned to each word
print(tfidf_vec.vocabulary_)

# the vector for each sentence;
# the numbers are ordered by word id
print(tfidf_matrix.toarray())
```

Output:

```
['and', 'document', 'first', 'is', 'one', 'second', 'the', 'third', 'this']
{'this': 8, 'is': 3, 'the': 6, 'first': 2, 'document': 1, 'second': 5, 'and': 0, 'third': 7, 'one': 4}
[[0.         0.43877674 0.54197657 0.43877674 0.         0.
  0.35872874 0.         0.43877674]
 [0.         0.27230147 0.         0.27230147 0.         0.85322574
  0.22262429 0.         0.27230147]
 [0.55280532 0.         0.         0.         0.55280532 0.
  0.28847675 0.55280532 0.        ]
 [0.         0.43877674 0.54197657 0.43877674 0.         0.
  0.35872874 0.         0.43877674]]
```
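sklearn's numbers differ from gensim's because, by default, `TfidfVectorizer` uses a smoothed natural-log IDF, ln((1 + N) / (1 + df)) + 1, and then L2-normalizes each row. A quick pure-Python check against the third document, 'and the third one', assuming those defaults:

```python
import math

N = 4  # number of documents
# document frequencies for the words of 'and the third one'
df = {'and': 1, 'one': 1, 'the': 4, 'third': 1}

# sklearn's default idf (smooth_idf=True): ln((1 + N) / (1 + df)) + 1
idf = {w: math.log((1 + N) / (1 + d)) + 1 for w, d in df.items()}

# every word occurs once in this document, so raw weights equal the idf values
weights = [idf[w] for w in ['and', 'one', 'the', 'third']]
norm = math.sqrt(sum(w * w for w in weights))
row = [round(w / norm, 8) for w in weights]
print(row)  # matches the nonzero entries of the third row above
```

Note that, unlike gensim's smooth, the +1 inside sklearn's log keeps the idf of an everywhere-present word at 1 rather than 0, which is why 'the' survives here.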
The corpus is still the same; now let's implement the calculation by hand in Python.

```python
corpus = ['this is the first document',
          'this is the second second document',
          'and the third one',
          'is this the first document']
```

1) Tokenize the corpus

Input:

```python
word_list = []
for i in range(len(corpus)):
    word_list.append(corpus[i].split(' '))
print(word_list)
```

Output:

```
[['this', 'is', 'the', 'first', 'document'],
 ['this', 'is', 'the', 'second', 'second', 'document'],
 ['and', 'the', 'third', 'one'],
 ['is', 'this', 'the', 'first', 'document']]
```
2) Count term frequencies

Input:

```python
from collections import Counter

countlist = []
for i in range(len(word_list)):
    count = Counter(word_list[i])
    countlist.append(count)
countlist
```

Output:

```
[Counter({'document': 1, 'first': 1, 'is': 1, 'the': 1, 'this': 1}),
 Counter({'document': 1, 'is': 1, 'second': 2, 'the': 1, 'this': 1}),
 Counter({'and': 1, 'one': 1, 'the': 1, 'third': 1}),
 Counter({'document': 1, 'first': 1, 'is': 1, 'the': 1, 'this': 1})]
```
3) Define the tf-idf functions

```python
import math

# count is one document's Counter from countlist;
# count[word] is the word's frequency and sum(count.values()) the document's token total
def tf(word, count):
    return count[word] / sum(count.values())

# number of documents that contain the word
def n_containing(word, count_list):
    return sum(1 for count in count_list if word in count)

# len(count_list) is the total number of documents; the +1 keeps the denominator from being 0
# (this smoothing makes idf slightly negative for a word that appears in every document)
def idf(word, count_list):
    return math.log(len(count_list) / (1 + n_containing(word, count_list)))

# multiply tf and idf
def tfidf(word, count, count_list):
    return tf(word, count) * idf(word, count_list)
```
4) Compute each word's tf-idf value

Input:

```python
import math

for i, count in enumerate(countlist):
    print("Top words in document {}".format(i + 1))
    scores = {word: tfidf(word, count, countlist) for word in count}
    sorted_words = sorted(scores.items(), key=lambda x: x[1], reverse=True)
    for word, score in sorted_words:
        print("\tWord: {}, TF-IDF: {}".format(word, round(score, 5)))
```

Output:

```
Top words in document 1
	Word: first, TF-IDF: 0.05754
	Word: this, TF-IDF: 0.0
	Word: is, TF-IDF: 0.0
	Word: document, TF-IDF: 0.0
	Word: the, TF-IDF: -0.04463
Top words in document 2
	Word: second, TF-IDF: 0.23105
	Word: this, TF-IDF: 0.0
	Word: is, TF-IDF: 0.0
	Word: document, TF-IDF: 0.0
	Word: the, TF-IDF: -0.03719
Top words in document 3
	Word: and, TF-IDF: 0.17329
	Word: third, TF-IDF: 0.17329
	Word: one, TF-IDF: 0.17329
	Word: the, TF-IDF: -0.05579
Top words in document 4
	Word: first, TF-IDF: 0.05754
	Word: is, TF-IDF: 0.0
	Word: this, TF-IDF: 0.0
	Word: document, TF-IDF: 0.0
	Word: the, TF-IDF: -0.04463
```

The negative values for 'the' come from the +1 smoothing in the idf denominator: 'the' appears in all 4 documents, so idf('the') = log(4/5) < 0.
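As a sanity check, the 0.23105 for 'second' in document 2 follows directly from the formulas above: tf = 2/6 and idf = ln(4 / (1 + 1)):

```python
import math

tf = 2 / 6                   # 'second' occurs twice among the six tokens of document 2
idf = math.log(4 / (1 + 1))  # 4 documents total, 1 contains 'second', +1 smoothing
print(round(tf * idf, 5))    # → 0.23105
```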
I put this summary together because I have been working with word2vec recently, and in particular with word2vec-based text representations. A trained word2vec model gives you a vector for each word, and from those word vectors we can build sentence vectors.

- The usual approach is to look up the word2vec vector for every word in the sentence, sum those vectors, and divide by the number of words. The resulting sentence vector can then be fed to a classifier (LogisticRegression, for example) for text classification.
- Another approach is to concatenate the word vectors: if each word vector is 1×100 and a sentence has 30 words, concatenating them gives a 30×100-dimensional sentence representation.
- What I want to do is take the word2vec vector of every word in the sentence, multiply each by the tf-idf value we computed earlier, then sum and divide by the number of words. In other words, tf-idf supplies a weight that reflects each word's importance.
- In the end I found that neither gensim nor sklearn meets this need out of the box, so I used the plain-Python method above.
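The weighted-average idea can be sketched in a few lines. Everything here is illustrative: the word vectors are toy 3-dimensional stand-ins for real word2vec output, and `word_tfidf` stands in for the per-word values computed by the manual method above:

```python
# toy stand-ins for word2vec vectors (real ones would be e.g. 100-dimensional)
word_vec = {
    'second':   [0.2, 0.7, 0.1],
    'document': [0.5, 0.1, 0.4],
}
# toy stand-ins for the per-word tf-idf weights computed above
word_tfidf = {'second': 0.23105, 'document': 0.0}

def sentence_vector(words, word_vec, word_tfidf):
    """tf-idf-weighted average of the word vectors of `words`."""
    dim = len(next(iter(word_vec.values())))
    total = [0.0] * dim
    for w in words:
        weight = word_tfidf.get(w, 0.0)            # unknown words contribute nothing
        for k, x in enumerate(word_vec.get(w, [0.0] * dim)):
            total[k] += weight * x
    return [t / len(words) for t in total]

print(sentence_vector(['second', 'document'], word_vec, word_tfidf))
```

Here 'document' has tf-idf 0 in document 2, so only 'second' contributes to the average, which is exactly the importance-weighting effect described above.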