

CS50 AI#

  • Knowledge: drawing inferences from information.

  • Uncertainty/Probability

  • Optimization

  • Learning

  • Neural networks: a computer analog to networks of biological neurons.
  • Language.

Search Problems#

  1. Result(s, a): the state that results from performing action a in state s.
  2. State space: the set of all states reachable from the initial state by any sequence of actions.
  3. The state space can be drawn as a graph connecting states; we also need a goal test to determine whether a state is a goal.
  4. Path cost: a numerical cost associated with a given path, which we want to minimize.

Node: a data structure that keeps track of a state, its parent node, the action applied to reach it, and the path cost from the initial state.

To avoid repeating mistakes, we add each expanded node's state to an explored set.

Pseudocode: start with a frontier containing the initial state and an empty explored set; repeatedly remove a node from the frontier, check the goal test, add the node to the explored set, and expand it.
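A minimal sketch of that loop in Python (the problem-specific `start`, `goal_test`, and `neighbors` functions are assumed here, not part of the course's distribution code):

```python
class Node:
    """Keeps track of a state, its parent node, and the action taken."""
    def __init__(self, state, parent=None, action=None):
        self.state = state
        self.parent = parent
        self.action = action


def search(start, goal_test, neighbors):
    frontier = [Node(start)]      # used as a stack -> DFS; use a queue for BFS
    explored = set()
    while frontier:
        node = frontier.pop()     # pop from the end: last-in, first-out
        if goal_test(node.state):
            path = []             # walk parents back to recover the solution
            while node.parent is not None:
                path.append(node.action)
                node = node.parent
            return list(reversed(path))
        explored.add(node.state)  # never expand the same state twice
        for action, state in neighbors(node.state):
            if state not in explored and all(n.state != state for n in frontier):
                frontier.append(Node(state, node, action))
    return None                   # frontier emptied: no solution exists
```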

Depth-First Search (frontier as a stack) ↔ Breadth-First Search (frontier as a queue)

A heuristic function h(n), which estimates how close a state is to the goal, leads to Greedy Best-First Search.

However, Greedy Best-First Search does not always find the shortest path. A* Search improves on it by expanding the node with the lowest \(g(n) + h(n)\), where \(g(n)\) is the cost to reach the node so far and \(h(n)\) is the estimated cost to the goal.
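A sketch of the A* ordering with a heap-based frontier (assuming `neighbors` also reports each step's cost, and a `heuristic(state)` function; the names are illustrative):

```python
import heapq
from itertools import count

def a_star(start, goal_test, neighbors, heuristic):
    # Frontier entries are (g + h, tiebreaker, g, state, path);
    # the tiebreaker keeps heapq from ever comparing states.
    tie = count()
    frontier = [(heuristic(start), next(tie), 0, start, [])]
    explored = set()
    while frontier:
        _, _, g, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return path
        if state in explored:
            continue
        explored.add(state)
        for action, next_state, cost in neighbors(state):
            if next_state not in explored:
                heapq.heappush(frontier, (g + cost + heuristic(next_state),
                                          next(tie), g + cost,
                                          next_state, path + [action]))
    return None
```

A* is optimal as long as \(h(n)\) is admissible (never overestimates the true cost) and consistent.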

Adversarial Search (tic-tac-toe)#

Alpha-Beta Pruning#

Depth-Limited Minimax#

An evaluation function estimates the expected utility of the game from a given state, so that depth-limited minimax can stop at the depth limit instead of searching all the way to terminal states.
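The three headings above combine naturally into one function. A minimal sketch, assuming game-specific `actions(s)`, `result(s, a)`, `terminal(s)`, `utility(s)`, and `evaluate(s)` functions are defined elsewhere:

```python
def minimax(state, depth, alpha=float("-inf"), beta=float("inf"),
            maximizing=True):
    if terminal(state):
        return utility(state)   # exact value: the game is over
    if depth == 0:
        return evaluate(state)  # depth limit hit: fall back to the estimate
    if maximizing:
        value = float("-inf")
        for action in actions(state):
            value = max(value, minimax(result(state, action),
                                       depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:   # prune: MIN would never allow this branch
                break
        return value
    else:
        value = float("inf")
        for action in actions(state):
            value = min(value, minimax(result(state, action),
                                       depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:   # prune: MAX would never allow this branch
                break
        return value
```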

Lecture 1 Knowledge#

Knowledge-based agents: agents that reason by operating on internal representations of knowledge.

Sentence: an assertion about the world in a knowledge representation language.

Propositional Logic#

Five logical connectives: not (¬), and (∧), or (∨), implication (→), and biconditional (↔).

Implication (P → Q): false only when P is true and Q is false; true in every other case.

Biconditional (P ↔ Q): true when P and Q are both true or both false.

  • Entailment (A ⊨ B): in every model in which A is true, B is also true.

To check whether a knowledge base entails a query, we can check the query in every model where the knowledge base is true (model checking).
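A dependency-free sketch of model checking, with sentences encoded as Python functions of a model (a toy encoding for illustration, not the lecture's `logic.py`):

```python
from itertools import product

def model_check(kb, query, symbols):
    """KB entails query iff query holds in every model where KB holds."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not query(model):
            return False  # a model where KB is true but the query is false
    return True

# Example: KB = (P -> Q) and P; query = Q.
kb = lambda m: ((not m["P"]) or m["Q"]) and m["P"]
query = lambda m: m["Q"]
print(model_check(kb, query, ["P", "Q"]))  # True: KB entails Q
```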

Knowledge Engineering#

Example: the board game Clue.

Lecture 2 Probability#

Probability ultimately boils down to the idea of possible worlds, like the outcomes of rolling a die.

\(0 \leq P(\omega) \leq 1\) & \(\sum_{\omega\in\Omega} P(\omega) = 1\)

Negation: \(P(\neg a) = 1 - P(a)\)

Marginalization: \(P(a) = P(a,b) + P(a,\neg b)\)

Conditional probability: \(P(a \mid b) = \frac{P(a \land b)}{P(b)}\)

Independence is crucial: when a and b are independent, \(P(a \land b) = P(a) \times P(b)\).

Bayes' Rule#

$ P(b|a) = \frac{P(b) \times P(a|b)}{P(a)} $
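A worked example with illustrative numbers: if 80% of rainy afternoons start with cloudy mornings, \(P(clouds \mid rain) = 0.8\), 40% of mornings are cloudy, and 10% of afternoons are rainy, then

$ P(rain|clouds) = \frac{P(rain) \times P(clouds|rain)}{P(clouds)} = \frac{0.1 \times 0.8}{0.4} = 0.2 $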

Joint Probability#

$ P(C|rain) = \alpha \times P(C, rain) $

where \(\alpha\) is a normalization constant that makes the distribution sum to 1.

Marginalization#

$ P(X = x_i) = \sum_{j}P(X = x_i, Y = y_j) = \sum_{j}P(X = x_i| Y = y_j)P(Y = y_j) $

Conditioning#

$ P(a) = P(a|b)P(b) + P(a|\neg b)P(\neg b) $

Bayesian Network#

A data structure that represents the dependencies among random variables; each node's probability distribution is conditioned on its parents.
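The lecture builds such networks with the pomegranate library; as a dependency-free sketch of the underlying idea, here is a hypothetical two-node network (Rain → Maintenance, probabilities made up) evaluated with the chain rule:

```python
# Root node: Rain has an unconditional distribution.
p_rain = {"none": 0.7, "light": 0.2, "heavy": 0.1}

# Child node: Maintenance is conditioned on its parent, Rain.
p_maintenance = {
    "none":  {"yes": 0.4, "no": 0.6},
    "light": {"yes": 0.2, "no": 0.8},
    "heavy": {"yes": 0.1, "no": 0.9},
}

def joint(rain, maintenance):
    """Chain rule: P(rain, maintenance) = P(rain) * P(maintenance | rain)."""
    return p_rain[rain] * p_maintenance[rain][maintenance]

print(joint("light", "yes"))  # 0.2 * 0.2 = 0.04
```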

Markov#

Markov assumption: the current state depends on only a fixed, finite number of previous states.
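A sketch of sampling from a two-state Markov chain (the sun/rain transition probabilities are illustrative):

```python
import random

# Transition model: P(tomorrow's weather | today's weather).
transitions = {
    "sun":  {"sun": 0.8, "rain": 0.2},
    "rain": {"sun": 0.3, "rain": 0.7},
}

def sample_chain(start, days):
    """Each day's state depends only on the previous day's state."""
    state, chain = start, [start]
    for _ in range(days):
        options = transitions[state]
        state = random.choices(list(options), weights=list(options.values()))[0]
        chain.append(state)
    return chain

print(sample_chain("sun", 7))  # e.g. ['sun', 'sun', 'rain', ...]
```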

Lecture 5 Neural Networks#

Activation Function#

\(h(x_1, x_2)= w_0 + w_1 x_1 + w_2 x_2\)

with weights \(w_1\) and \(w_2\), and bias \(w_0\). The weighted sum is then passed through an activation function (such as a step function or sigmoid) to produce the output.

Gradient Descent#

Stochastic Gradient Descent: compute the gradient using one data point at a time.

Mini-Batch Gradient Descent: compute the gradient using one small batch of data points at a time.
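A sketch of plain (full-batch) gradient descent on the linear hypothesis above, with a tiny made-up dataset generated from \(y = 1 + 2x_1 + 3x_2\):

```python
# Minimize mean squared error for h(x1, x2) = w0 + w1*x1 + w2*x2.
data = [((1.0, 2.0), 9.0), ((2.0, 1.0), 8.0), ((3.0, 3.0), 16.0)]

w0, w1, w2 = 0.0, 0.0, 0.0
lr = 0.1  # learning rate

for epoch in range(2000):
    g0 = g1 = g2 = 0.0
    for (x1, x2), y in data:
        error = (w0 + w1 * x1 + w2 * x2) - y
        g0 += error            # gradient of 0.5 * error**2 w.r.t. w0
        g1 += error * x1       # ... w.r.t. w1
        g2 += error * x2       # ... w.r.t. w2
    n = len(data)
    w0 -= lr * g0 / n          # step against the gradient
    w1 -= lr * g1 / n
    w2 -= lr * g2 / n

print(round(w0, 2), round(w1, 2), round(w2, 2))  # approaches 1.0 2.0 3.0
```

Stochastic and mini-batch variants change only the inner loop: they use one point, or one small batch, per step instead of the whole dataset.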

Perceptron#

Overfitting#

Overfitting: a model that fits too closely to a particular dataset and fails to generalize to new data.

Computer Vision & CNN#

Max pooling is commonly used: it downsamples a feature map by keeping only the maximum value in each region, which makes the network more resilient and robust to small variations in the input.
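A sketch of a small convolutional network in Keras (the library the course uses; the layer sizes are illustrative, assuming 28x28 grayscale inputs and 10 classes):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    # Convolution learns 32 local 3x3 filters.
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu"),
    # Max pooling keeps only the max of each 2x2 region, downsampling
    # the feature map and adding tolerance to small shifts.
    tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # one output per class
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```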

RNN#

  1. A feed-forward neural network passes data in one direction, from input to output; a recurrent neural network feeds its own output back in as input, which makes it suitable for sequences (see the sketch below).
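A sketch of the recurrence that separates the two, as a toy single-unit RNN in plain Python (the weights are random just to make it runnable):

```python
import math
import random

w_x, w_h, b = random.random(), random.random(), 0.0

def rnn_step(h_prev, x):
    """The new hidden state depends on the input AND the previous state."""
    return math.tanh(w_x * x + w_h * h_prev + b)

h = 0.0                       # initial hidden state
for x in [0.5, -1.0, 2.0]:    # a short input sequence
    h = rnn_step(h, x)        # output is fed back in at the next step
    print(h)
```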
