import ollama
response = ollama.generate(model='llama3', prompt='Tell me in Korean why the sky is blue')
print(response['response'])
A question that has puzzled humans for centuries!
So, why is the sky blue? Let me break it down in simple terms:
**Short answer:** The sky appears blue because of a phenomenon called Rayleigh scattering.
**Longer explanation:**
When sunlight enters Earth's atmosphere, it encounters tiny molecules of gases like nitrogen (N2) and oxygen (O2). These molecules are much smaller than the wavelength of light. As a result, they scatter shorter (blue) wavelengths more efficiently than longer (red) wavelengths.
Think of it like a game of pool: when a cue ball hits a group of smaller balls, they bounce around in all directions, spreading out. Similarly, the blue light from the sun is scattered in all directions by these tiny molecules, reaching our eyes and making the sky appear blue.
**Other factors:** While Rayleigh scattering is the main culprit behind the blue sky, other atmospheric conditions can influence its color:
1. **Atmospheric particles:** Tiny aerosols like dust, pollen, or smoke can scatter light, making the sky appear more hazy or gray.
2. **Clouds:** Clouds and fog can reflect sunlight, adding to the sky's brightness and sometimes changing its apparent color.
3. **Angle of the sun:** The position of the sun in the sky affects its apparent color. When it's overhead, the light has to travel through less atmosphere, making it appear more intense and blue.
So, there you have it! The beauty of the blue sky is a result of the harmonious combination of light, molecules, and atmospheric conditions.
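The `ollama.generate` call above returns the whole answer in a single dict; passing `stream=True` instead yields partial chunks as they are produced. A minimal sketch (assumes the `ollama` package and a local Ollama server with `llama3` pulled; `stream_generate` and `join_chunks` are illustrative helper names, not part of the library):

```python
def join_chunks(chunks):
    # Each streamed chunk is a dict whose 'response' field holds a partial answer.
    return "".join(chunk["response"] for chunk in chunks)

def stream_generate(model: str, prompt: str) -> str:
    import ollama  # requires `pip install ollama` and a running Ollama server
    chunks = []
    for chunk in ollama.generate(model=model, prompt=prompt, stream=True):
        print(chunk["response"], end="", flush=True)  # show tokens as they arrive
        chunks.append(chunk)
    return join_chunks(chunks)
```

Calling `stream_generate('llama3', 'Why is the sky blue?')` prints the answer incrementally instead of waiting for the full response.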
from langchain_teddynote import logging
logging.langsmith("llama3_Model_20521")
The LangChain/LangSmith API key is not set. Note: https://wikidocs.net/250954
LangChain overview
from langchain_community.chat_models import ChatOllama
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_teddynote.messages import stream_response
llm = ChatOllama(model="llama3")
prompt = ChatPromptTemplate.from_template("Give a brief explanation of {topic}.")
chain = prompt | llm | StrOutputParser()
answer = chain.stream({"topic": "deep learning"})
stream_response(answer)
C:\Users\wonta\AppData\Local\Temp\ipykernel_10932\713405683.py:7: LangChainDeprecationWarning: The class `ChatOllama` was deprecated in LangChain 0.3.1 and will be removed in 1.0.0. An updated version of the class exists in the `langchain-ollama` package and should be used instead. To use it run `pip install -U langchain-ollama` and import as `from langchain_ollama import ChatOllama`.
  llm = ChatOllama(model="llama3")
Deep learning is a subset of machine learning that uses artificial neural networks to analyze and learn from data. Here's a brief overview:
**What are Neural Networks?**
Artificial neural networks are composed of interconnected nodes (neurons) that process and transmit information. Each node applies an activation function to the weighted sum of its inputs, producing an output that is passed to subsequent nodes.
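The weighted-sum-plus-activation step described above fits in a few lines of plain Python (the input values, weights, and bias below are arbitrary illustrative numbers):

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs plus a bias, passed through a sigmoid activation.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# A single node's output always lands in the open interval (0, 1).
out = neuron([0.2, 0.7], [0.5, -0.3], 0.1)
```

With a zero weighted sum the sigmoid returns exactly 0.5, which is a handy sanity check.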
**How do Deep Learning Models Work?**
Deep learning models typically consist of multiple layers of neural networks, each processing a different level of abstraction in the data. The layers are designed to:
1. **Extract features**: Early layers focus on simple features like edges and lines.
2. **Recognize patterns**: Middle layers learn more complex patterns, such as shapes and textures.
3. **Make predictions**: Later layers integrate information from previous layers to make final predictions or decisions.
**Key Characteristics of Deep Learning:**
1. **Hierarchical representations**: Deep learning models learn hierarchical representations of data, allowing them to capture complex relationships.
2. **Large datasets**: Deep learning models require large datasets to train and improve their performance.
3. **Multiple layers**: Multiple layers allow deep learning models to extract increasingly abstract features from the data.
4. **Non-linear transformations**: Each layer applies non-linear transformations to the input data, enabling the model to learn complex relationships.
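Points 3 and 4 — stacking layers and applying a non-linear transformation at each one — can be sketched by composing two small dense layers (sizes and weights are made up for illustration; real models learn them from data):

```python
import math

def dense_layer(xs, weights, biases):
    # One fully connected layer: each output unit is tanh(weighted sum + bias).
    return [math.tanh(sum(x * w for x, w in zip(xs, ws)) + b)
            for ws, b in zip(weights, biases)]

# The second layer consumes the first layer's outputs, i.e. its learned features.
hidden = dense_layer([1.0, -1.0], [[0.5, 0.5], [1.0, -1.0]], [0.0, 0.0])
output = dense_layer(hidden, [[1.0, 1.0]], [0.0])
```

Because each layer applies `tanh`, the composition is non-linear: no single linear layer could reproduce the stacked mapping.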
**Applications of Deep Learning:**
1. **Computer Vision**: Image recognition, object detection, facial recognition
2. **Natural Language Processing (NLP)**: Text classification, sentiment analysis, language translation
3. **Speech Recognition**: Speech-to-text systems and voice assistants
4. **Robotics**: Control systems for robots and autonomous vehicles
**Challenges and Limitations:**
1. **Training time**: Deep learning models can be computationally intensive and require significant training time.
2. **Data quality**: The quality of the training data is crucial, as poor-quality data can lead to poor performance or overfitting.
3. **Interpretability**: Deep learning models can be difficult to interpret, making it challenging to understand their decision-making processes.
That's a brief overview of deep learning!
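The `stream_response` helper used above essentially drains the token iterator, printing each piece as it arrives and collecting the full answer. A dependency-free sketch of that idea (the real implementation in `langchain_teddynote` may differ):

```python
def stream_response(token_stream, end=""):
    # Print each streamed token immediately, then return the accumulated answer.
    pieces = []
    for token in token_stream:
        print(token, end=end, flush=True)
        pieces.append(token)
    return "".join(pieces)

# Any iterable of strings works here, e.g. chain.stream({"topic": "deep learning"}).
answer = stream_response(iter(["Deep ", "learning ", "is ..."]))
```

The same loop can be written inline — `for token in chain.stream(...): print(token, end="")` — the helper just adds the accumulation.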
RAG (Retrieval-Augmented Generation)
Core components of LangChain
- Large language models (LLMs)
- Chains
- Prompts
- Parsers
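How these components plug together with the `|` operator can be mimicked in a few lines of dependency-free Python — a toy sketch of the piping idea, not the real LangChain Runnable interface:

```python
class Step:
    """Toy stand-in for a LangChain runnable: wraps a function, supports `|` piping."""
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # Left-to-right composition: feed this step's output into the next step.
        return Step(lambda value: other.invoke(self.invoke(value)))

prompt = Step(lambda topic: f"Please give a short explanation of {topic}.")
llm = Step(lambda text: {"content": f"<model answer to: {text}>"})
parser = Step(lambda message: message["content"])

chain = prompt | llm | parser
result = chain.invoke("deep learning")
```

Each stage transforms its input and hands the result on, which is exactly the shape of `prompt | llm | StrOutputParser()` below.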
from langchain_community.chat_models import ChatOllama
client = ChatOllama()
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough
from langchain_community.chat_models import ChatOllama
prompt = ChatPromptTemplate.from_template(
    "Please give a short explanation of the given {topic}."
)
output_parser = StrOutputParser()
client = ChatOllama(model="llama3")
chain = (
    {"topic": RunnablePassthrough()}
    | prompt
    | client
    | output_parser
)
chain.invoke("더블딥")  # "double dip"
'I think you meant "Double Deep" as in Double Deeper, a popular Korean fashion trend!\n\nDouble Deeper refers to wearing two layers of clothing that are typically worn separately, such as a sweater over a shirt, or a dress over leggings. This fashion trend is characterized by layering different textures, colors, and patterns to create a unique and stylish look.\n\nIn Korea, Double Deeper has become a popular way for young women to express themselves through fashion, experimenting with different combinations of clothing items to create a bold and eye-catching style. It\'s all about mixing and matching different pieces to create a one-of-a-kind outfit!'
analysis_prompt = ChatPromptTemplate.from_template("Translate this answer into English: {answer}")
composed_chain = {"answer": chain} | analysis_prompt | client | StrOutputParser()
composed_chain.invoke({"topic": "더블딥"})
'Here\'s the translation:\n\nI\'d be happy to provide a brief overview on the topic "더블딥" (Double Deep).\n\nDouble Deep is a type of neural network architecture that consists of two hidden layers, each with its own set of learnable weights and biases. This architecture is designed to improve the performance of deep learning models by increasing the capacity of the model to represent complex relationships between inputs and outputs.\n\nThe Double Deep architecture is particularly useful for tasks such as image classification, object detection, and sequence prediction, where the presence of multiple hidden layers can help to capture subtle patterns and features in the data. The additional layer also provides more opportunities for the model to learn non-linear representations of the input data, which can lead to improved predictive accuracy.\n\nOverall, Double Deep is a powerful architecture that has been shown to be effective in a wide range of applications, from computer vision to natural language processing.'
model = ChatOllama(model="llama3")
prompt = ChatPromptTemplate.from_template(
    "Give a short explanation of {topic}."
)
chain = prompt | model | StrOutputParser()
analysis_prompt = ChatPromptTemplate.from_template("Translate this answer into English: {answer}")
composed_chain_with_lambda = (
    chain
    | (lambda answer: {"answer": answer})
    | analysis_prompt
    | model
    | StrOutputParser()
)
composed_chain_with_lambda.invoke({"topic": "더블딥"})
'Here is the translation:\n\nThe "Double Deep" topic refers to the concept of using two or more neural network architectures, each with its own set of weights and biases, in a hierarchical manner to solve complex problems.\n\nIn traditional deep learning approaches, a single neural network is used to learn features from raw data. In contrast, Double Deep models use multiple networks, often with different architectures or specializations, to learn features at different levels of abstraction.\n\nThis approach can be useful for tasks that require processing multiple types of information or handling complex relationships between variables. For example, in natural language processing (NLP), a Double Deep model might consist of a first network that learns word-level representations and a second network that learns sentence-level representations to capture contextual dependencies.\n\nBy combining the strengths of multiple networks, Double Deep models can potentially achieve better performance, improved robustness, or even novel capabilities compared to single-network approaches.'
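The lambda step exists because the first chain emits a plain string, while `analysis_prompt` expects a dict with an `answer` key. The same plumbing in plain Python, with no LangChain dependency (function names here are illustrative stand-ins):

```python
def first_chain(topic):
    # Stand-in for prompt | model | parser: produces a plain string.
    return f"a short explanation of {topic}"

def to_prompt_input(text):
    # The lambda's job: wrap the string in the dict the next prompt expects.
    return {"answer": text}

def translation_prompt(values):
    # Stand-in for analysis_prompt: formats the dict into the next model input.
    return f"Translate this answer into English: {values['answer']}"

composed = lambda topic: translation_prompt(to_prompt_input(first_chain(topic)))
```

Without the wrapping step, the template would receive a bare string and fail to find its `answer` variable.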