@mShuaiZhao · 2018-02-10

Week 02. Ng's Sequence Models Course

2018.02 Coursera


Introduction to Word Embeddings

1. Word Representation

2. Using word embeddings

3. Properties of word embeddings

By now, you should have a sense of how word embeddings can help you build NLP applications. One of the most fascinating properties of word embeddings is that they can also help with analogy reasoning.
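As a concrete illustration, here is a minimal numpy sketch of the analogy task from the lecture (man : woman :: king : ?), solved by maximizing cosine similarity as Ng describes. The 2-D vectors below are toy values made up purely for illustration; real embeddings would be learned, high-dimensional vectors.

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between vectors u and v."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def analogy(a, b, c, embeddings):
    """Solve 'a is to b as c is to ?' by maximizing
    cosine_similarity(e_b - e_a, e_d - e_c) over the vocabulary."""
    e_a, e_b, e_c = embeddings[a], embeddings[b], embeddings[c]
    best_word, best_score = None, -np.inf
    for word, e_d in embeddings.items():
        if word in (a, b, c):
            continue
        score = cosine_similarity(e_b - e_a, e_d - e_c)
        if score > best_score:
            best_word, best_score = word, score
    return best_word

# Toy 2-D embeddings (hypothetical values, for illustration only).
embeddings = {
    "man":   np.array([1.0, 0.0]),
    "woman": np.array([1.0, 1.0]),
    "king":  np.array([3.0, 0.1]),
    "queen": np.array([3.0, 1.1]),
}
print(analogy("man", "woman", "king", embeddings))  # -> "queen"
```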

4. Embedding matrix

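In the lecture's notation, the embedding matrix $E$ has shape $(300, 10000)$, and multiplying it by a one-hot vector $o_j$ picks out the embedding $e_j = E \cdot o_j$ for word $j$; in practice, frameworks use a direct column lookup rather than the full matrix multiply. A minimal numpy sketch of that equivalence (the index 4257 is arbitrary):

```python
import numpy as np

vocab_size, embed_dim = 10000, 300  # dimensions used in the lecture
E = np.random.randn(embed_dim, vocab_size) * 0.01  # learned in practice

j = 4257                    # index of some word in the vocabulary
o_j = np.zeros(vocab_size)  # one-hot vector o_j
o_j[j] = 1.0

e_via_matmul = E @ o_j      # E · o_j = e_j, as in the lecture notation
e_via_lookup = E[:, j]      # what frameworks actually do: a column lookup

assert np.allclose(e_via_matmul, e_via_lookup)
```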

Learning Word Embeddings: Word2vec & GloVe

1. Learning word embedding

2. Word2Vec

3. Negative Sampling

4. GloVe word vectors

Applications using Word Embeddings

1. Sentiment classification

So one of the challenges of sentiment classification is that you might not have a huge labeled data set. For a sentiment classification task, training sets of anywhere from 10,000 to 100,000 words would not be uncommon, and sometimes even smaller than 10,000 words. Word embeddings can help you do much better, especially when you have a small training set. So here's what you can do.
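A minimal sketch of the simple model Ng describes first: average the embeddings of the words in the review, then feed the average to a softmax classifier over the star ratings. The `embeddings`, `W`, and `b` below are random placeholders standing in for pretrained embeddings and trained classifier weights.

```python
import numpy as np

def predict_sentiment(review, embeddings, W, b):
    """Average the word embeddings of the review, then apply a
    softmax classifier: y_hat = softmax(W @ avg + b)."""
    vectors = [embeddings[w] for w in review.lower().split() if w in embeddings]
    avg = np.mean(vectors, axis=0)       # one vector per review
    logits = W @ avg + b
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()               # probabilities over the ratings

# Random stand-ins for pretrained embeddings and trained weights.
embed_dim, n_classes = 300, 5
rng = np.random.default_rng(0)
embeddings = {w: rng.standard_normal(embed_dim)
              for w in ["completely", "lacking", "in", "good", "service"]}
W = rng.standard_normal((n_classes, embed_dim))
b = np.zeros(n_classes)
print(predict_sentiment("completely lacking in good service", embeddings, W, b))
```

Averaging ignores word order, so a review full of negated "good"s can look positive word-by-word, which is why the lecture then moves on to a many-to-one RNN model.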

2. Debiasing word embeddings

Machine learning and AI algorithms are increasingly trusted to help with, or to make, extremely important decisions, so we would like to make sure that, as much as possible, they are free of undesirable forms of bias, such as gender bias, ethnicity bias, and so on. What I want to do in this video is show you some of the ideas for diminishing or eliminating these forms of bias in word embeddings. When I use the term bias in this video, I don't mean bias in the bias-variance sense; instead I mean gender, ethnicity, or sexual orientation bias. That's a different sense of bias than the one typically used in technical discussions of machine learning.
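One of the ideas covered in this video is the "neutralize" step from Bolukbasi et al. (2016): identify a bias direction $g$ (e.g. $e_{she} - e_{he}$ for gender) and, for words that should be neutral with respect to that attribute (like "doctor"), remove the component of the embedding that lies along $g$. A minimal numpy sketch, with random vectors standing in for real embeddings:

```python
import numpy as np

def neutralize(e, g):
    """Project embedding e onto the bias direction g and subtract that
    component, leaving the part of e orthogonal to the bias direction."""
    e_bias = (np.dot(e, g) / np.dot(g, g)) * g  # projection of e onto g
    return e - e_bias

# Toy check: after neutralizing, e has no component along g.
rng = np.random.default_rng(0)
g = rng.standard_normal(50)         # e.g. e_she - e_he
e_doctor = rng.standard_normal(50)  # hypothetical 'doctor' vector
e_debiased = neutralize(e_doctor, g)
print(np.dot(e_debiased, g))        # ~0.0
```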
