Machine Learning & Deep Learning practice code 2

AI Engineering Course Log · July 12, 2023

road to AI Engineering

  • check the datasets seaborn provides
import seaborn as sns
sns.get_dataset_names()
  • get the data set
iris = sns.load_dataset('iris')
type(iris)
  • check the first rows
iris.head()
  • get a quick overview: count the values of the 'species' column using value_counts
iris['species'].value_counts()
  • make bar chart of species using value_counts
iris['species'].value_counts().plot(kind='bar')
  • draw a scatter plot with pandas plot(kind='scatter')
iris.plot(kind='scatter', x='sepal_length', y='petal_length')
  • draw the same scatter plot with seaborn's scatterplot, colored by species
sns.scatterplot(data=iris, x='sepal_length', y='petal_length', hue='species')
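Beyond a single pair of columns, seaborn can also show every pairwise relationship at once; a minimal optional sketch, reusing the same iris DataFrame and sns import as above:

# pairplot draws a scatter plot for every pair of numeric columns,
# colored by species, which makes class separation easy to spot
sns.pairplot(iris, hue='species')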
  • split off the features X by dropping the 'species' column
X = iris.drop('species', axis=1)
  • check the feature DataFrame X
X
  • split off the target y
y = iris['species']
  • check the target Series y
y
  • convert the DataFrame/Series into NumPy arrays
X = X.values
y = y.values
  • check
print(X[:2])
print(y[:2])
  • encode y as numbers, since the labels are still strings
from sklearn.preprocessing import LabelEncoder
  • define function "LabelEncoder", re-save to y and print&check
le = LabelEncoder()
y = le.fit_transform(y)
print(le.classes_)
  • check y values
y[:10]
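As a quick sketch of what the encoding means: the index of each entry in le.classes_ is the integer it was mapped to, and inverse_transform goes back the other way.

# mapping of encoded integer -> original label (for iris: setosa, versicolor, virginica)
print(dict(enumerate(le.classes_)))
# decode a few integers back into the string labels
print(le.inverse_transform([0, 1, 2]))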
  • modeling: DecisionTree
from sklearn.tree import DecisionTreeClassifier
  • create a DecisionTreeClassifier as dt, fit the model, and check the score
dt = DecisionTreeClassifier()
dt.fit(X, y)
dt.score(X, y)
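Note that dt.score(X, y) measures accuracy on the same data the tree was fitted on, so it is usually 1.0 and says little about generalization. A minimal sketch of a fairer check with a held-out split (the variable names X_train, dt2, etc. are just illustrative):

from sklearn.model_selection import train_test_split

# hold out 20% of the rows for testing
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
dt2 = DecisionTreeClassifier()
dt2.fit(X_train, y_train)
print(dt2.score(X_test, y_test))  # accuracy on unseen data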
  • modeling: RandomForest
from sklearn.ensemble import RandomForestClassifier
  • create a RandomForestClassifier as rf, fit the model, and check the score
rf = RandomForestClassifier()
rf.fit(X, y)
rf.score(X, y)
rf.predict(X)
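A small optional sketch: a fitted RandomForest exposes feature_importances_, whose order matches the feature columns of X (sepal_length, sepal_width, petal_length, petal_width in this dataset).

# how much each input column contributed to the forest's splits
for name, importance in zip(['sepal_length', 'sepal_width', 'petal_length', 'petal_width'],
                            rf.feature_importances_):
    print(name, importance)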
  • print the 150th sample (index 149) and its label
print(X[149])
print(y[149])
  • predict with the trained model on this single sample
pred = rf.predict([X[149]])
print(pred)
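Since y was label-encoded, the prediction comes back as an integer; the fitted LabelEncoder can map it back to the species name, e.g.:

# decode the predicted integer back to the original string label
print(le.inverse_transform(pred))  # e.g. ['virginica']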
  • Deep Learning Modeling
  • import the required libraries
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
  • build a Sequential model
    (input shape: (4,), hidden layer: 6 units, activation='relu', output layer: 3 units, activation='softmax')
model = Sequential()
model.add(Dense(6, activation='relu', input_shape=(4,)))
model.add(Dense(3, activation='softmax'))
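A quick optional sanity check: model.summary() lists each layer's output shape and parameter count, which for this model should be 4*6+6 = 30 weights in the hidden layer and 6*3+3 = 21 in the output layer.

# print layer shapes and parameter counts
model.summary()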
  • model compile
    loss='sparse_categorical_crossentropy'
    optimizer='adam'
    metrics=['accuracy']
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

Notes on choosing the loss:
binary classification (single output unit): binary_crossentropy
multi-class classification with one-hot labels: categorical_crossentropy
multi-class classification with integer labels (like y here): sparse_categorical_crossentropy
regression: mse / mae, not a cross entropy
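A minimal sketch of the practical difference: sparse_categorical_crossentropy takes the integer labels produced by LabelEncoder directly, while categorical_crossentropy expects one-hot vectors.

from tensorflow.keras.utils import to_categorical

# one-hot version of the labels, shape (150, 3): class 0 becomes [1, 0, 0]
y_onehot = to_categorical(y)
# y_onehot would pair with loss='categorical_crossentropy'; the compile call above
# uses sparse_categorical_crossentropy, matching the integer-encoded y passed to fit below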

  • train the model: fit
    X, y, epochs=10, batch_size=8
    save the result: history
history = model.fit(X, y, epochs=10, batch_size=8)
  • result not good enough -> train longer
    increase the number of epochs
    X, y, epochs=50, batch_size=8 --> epochs 50
    save the result: history
history = model.fit(X, y, epochs=50, batch_size=8)
  • evaluate the performance of deep learning
  • make a graph using matplotlib
  1. plot: history.history['loss'], 'r'
  2. plot: history.history['accuracy'], 'b'
  3. title: 'Loss and Accuracy'
  4. xlabel: "Epochs"
  5. ylabel: "Loss"
  6. legend: ["Loss", "Accuracy"]
  7. plt.show()
import matplotlib.pyplot as plt

plt.plot(history.history['loss'], 'r')
plt.plot(history.history['accuracy'], 'b')
plt.title('Loss and Accuracy')
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend(["Loss", "Accuracy"])
plt.show()
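To read the model's numbers directly rather than from the plot, evaluate() returns the loss and the compiled metrics, and predict() returns per-class probabilities that argmax turns into class indices; a minimal sketch:

import numpy as np

# loss and accuracy on the full dataset (the same data used for training here)
loss, acc = model.evaluate(X, y, verbose=0)
print(loss, acc)

# per-class probabilities -> predicted class index for the first 5 rows
probs = model.predict(X[:5])
print(np.argmax(probs, axis=1))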
