🔗 The Complete LangChain Guide: A New Paradigm for AI Application Development

김민범 · July 2, 2025

๐Ÿค– ์ด ๊ธ€์€ Claude Desktop๊ณผ MCP๋ฅผ ํ™œ์šฉํ•˜์—ฌ AI๊ฐ€ ์ž๋™์œผ๋กœ ์ž‘์„ฑํ•œ LangChain ๊ฐ€์ด๋“œ์ž…๋‹ˆ๋‹ค!

๐Ÿ”— LangChain์ด๋ž€?

LangChain์€ LLM(Large Language Model)์„ ํ™œ์šฉํ•œ ์• ํ”Œ๋ฆฌ์ผ€์ด์…˜ ๊ฐœ๋ฐœ์„ ์œ„ํ•œ ๊ฐ•๋ ฅํ•œ ํ”„๋ ˆ์ž„์›Œํฌ์ž…๋‹ˆ๋‹ค. ๋ณต์žกํ•œ AI ์›Œํฌํ”Œ๋กœ์šฐ๋ฅผ ๊ฐ„๋‹จํ•˜๊ณ  ๋ชจ๋“ˆํ™”๋œ ๋ฐฉ์‹์œผ๋กœ ๊ตฌ์ถ•ํ•  ์ˆ˜ ์žˆ๊ฒŒ ํ•ด์ค๋‹ˆ๋‹ค.

⭐ Core Concepts

LangChain์˜ ์ฃผ์š” ๊ตฌ์„ฑ ์š”์†Œ๋ฅผ ์ดํ•ดํ•˜๋Š” ๊ฒƒ์ด ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค:

  1. Chains: pipelines that connect multiple components
  2. Prompts: management of the prompt templates sent to the LLM
  3. Memory: retention of conversation history and context
  4. Agents: AI agents that use tools autonomously
  5. Tools: integrations with external APIs and functions

🚀 Getting Started with LangChain

Installation and Basic Setup

# Install LangChain
pip install langchain langchain-openai

# Extra dependencies (optional)
pip install langchain-community chromadb faiss-cpu

Basic Usage

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# Initialize the LLM
llm = ChatOpenAI(temperature=0.7)

# Create a prompt template
prompt = ChatPromptTemplate.from_template(
    "Briefly explain the following topic: {topic}"
)

# Compose the chain with LCEL pipe syntax
chain = prompt | llm | StrOutputParser()

# Run it
result = chain.invoke({"topic": "quantum computing"})
print(result)
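
Since an LCEL chain implements the Runnable interface, the same chain can also process several inputs in a single call. A minimal sketch:

# Batch execution over multiple inputs
results = chain.batch([{"topic": "quantum computing"}, {"topic": "blockchain"}])
for r in results:
    print(r)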

🔧 LangChain's Core Components

1. Prompt Templates

from langchain_core.prompts import PromptTemplate

# A basic prompt template
template = """
You are a {role}.
Answer the following question in a {style} style:

Question: {question}
Answer:"""

prompt = PromptTemplate(
    input_variables=["role", "style", "question"],
    template=template
)

# Usage example
formatted_prompt = prompt.format(
    role="data scientist",
    style="professional and friendly",
    question="What is the difference between machine learning and deep learning?"
)

2. Memory

from langchain.memory import ConversationBufferWindowMemory
from langchain.chains import ConversationChain

# Conversation-history memory (keeps only the last 5 exchanges)
memory = ConversationBufferWindowMemory(k=5)

# Create a conversation chain
conversation = ConversationChain(
    llm=llm,
    memory=memory,
    verbose=True
)

# Hold a conversation
response1 = conversation.predict(input="Hello!")
response2 = conversation.predict(input="My name is 김민범.")
response3 = conversation.predict(input="What did I say my name was?")
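
To confirm the k=5 window is doing its job, you can inspect the buffer directly:

# Only the most recent k exchanges survive in the buffer
print(memory.load_memory_variables({}))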

3. Tools & Agents

from langchain import hub
from langchain.agents import create_openai_functions_agent, AgentExecutor
from langchain_community.tools import DuckDuckGoSearchRun, WikipediaQueryRun
from langchain_community.utilities import WikipediaAPIWrapper

# Define the tools
search = DuckDuckGoSearchRun()
wikipedia = WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())

tools = [search, wikipedia]

# Create the agent. Note that the agent needs its own prompt with an
# agent_scratchpad slot; the standard one can be pulled from the LangChain hub.
agent_prompt = hub.pull("hwchase17/openai-functions-agent")
agent = create_openai_functions_agent(llm, tools, agent_prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

# Run the agent
result = agent_executor.invoke({
    "input": "Tell me about the 2024 Nobel Prize in Physics laureates"
})

📚 Implementing RAG (Retrieval-Augmented Generation)

RAG is one of LangChain's most powerful capabilities:

from langchain_community.document_loaders import TextLoader
from langchain_community.vectorstores import Chroma
from langchain_openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.chains import RetrievalQA

# 1. Load and split the document
loader = TextLoader("document.txt")
documents = loader.load()

text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)

# 2. Create embeddings and a vector store
embeddings = OpenAIEmbeddings()
vectorstore = Chroma.from_documents(texts, embeddings)

# 3. Build the RAG chain
qa_chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=vectorstore.as_retriever()
)

# 4. Run a query
query = "What are the key points of this document?"
result = qa_chain.invoke({"query": query})
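
RetrievalQA is the classic convenience helper; the same retrieval flow can also be written directly in LCEL, which makes each step explicit. A minimal sketch under the setup above (the prompt wording here is illustrative):

from langchain_core.runnables import RunnablePassthrough

rag_prompt = ChatPromptTemplate.from_template(
    "Answer the question using only the following context:\n{context}\n\nQuestion: {question}"
)

def format_docs(docs):
    # Join the retrieved chunks into a single context string
    return "\n\n".join(doc.page_content for doc in docs)

rag_chain = (
    {"context": vectorstore.as_retriever() | format_docs,
     "question": RunnablePassthrough()}
    | rag_prompt
    | llm
    | StrOutputParser()
)

print(rag_chain.invoke("What are the key points of this document?"))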

🎯 Practical Use Cases

1. Customer Service Chatbot

class CustomerServiceBot:
    def __init__(self):
        self.llm = ChatOpenAI(temperature=0.3)
        self.memory = ConversationBufferWindowMemory(k=10)
        
        self.prompt = ChatPromptTemplate.from_template("""
        You are a friendly customer service representative.
        Give accurate, helpful answers to the customer's questions.
        
        Conversation history: {chat_history}
        Customer question: {human_input}
        
        Answer:""")
        
        self.chain = self.prompt | self.llm | StrOutputParser()
    
    def respond(self, user_input):
        chat_history = self.memory.chat_memory.messages
        response = self.chain.invoke({
            "chat_history": chat_history,
            "human_input": user_input
        })
        
        # Record both sides of the exchange in memory
        self.memory.chat_memory.add_user_message(user_input)
        self.memory.chat_memory.add_ai_message(response)
        
        return response
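
A quick usage sketch (the questions are illustrative):

bot = CustomerServiceBot()
print(bot.respond("I'd like a refund for my order."))
print(bot.respond("How long will the refund take?"))  # the earlier turn is recalled from memory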

2. Document Summarization System

from langchain.chains.summarize import load_summarize_chain

def create_document_summarizer():
    # A map-reduce style summarization chain
    summarize_chain = load_summarize_chain(
        llm=llm,
        chain_type="map_reduce",
        verbose=True
    )
    
    return summarize_chain

# Usage example (`documents` was loaded in the RAG section above)
summarizer = create_document_summarizer()
summary = summarizer.invoke({"input_documents": documents})

3. Code Review Assistant

class CodeReviewAssistant:
    def __init__(self):
        self.llm = ChatOpenAI(temperature=0.2)
        
        self.review_prompt = ChatPromptTemplate.from_template("""
        Review the following code and suggest improvements:
        
        Language: {language}
        Code:
        ```{language}
        {code}
        ```
        
        Review results:
        1. Code quality assessment
        2. Suggested improvements
        3. Security issues
        4. Performance optimization suggestions
        """)
        
        self.chain = self.review_prompt | self.llm | StrOutputParser()
    
    def review_code(self, code, language="python"):
        return self.chain.invoke({
            "code": code,
            "language": language
        })
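
Usage with a toy snippet:

assistant = CodeReviewAssistant()
print(assistant.review_code("def add(a, b):\n    return a + b"))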

🔥 Advanced LangChain Features

1. Creating Custom Chains

from langchain_core.runnables import RunnableLambda

def create_analysis_chain():
    # Preprocess the input
    preprocess = RunnableLambda(lambda x: x.upper().strip())
    
    # Analysis prompt
    analysis_prompt = ChatPromptTemplate.from_template(
        "Analyze the following text and extract its sentiment and key keywords: {text}"
    )
    
    # Postprocess the output
    postprocess = RunnableLambda(lambda x: {
        "analysis": x,
        "timestamp": "2025-07-02"
    })
    
    # Compose the chain
    chain = (
        {"text": preprocess} 
        | analysis_prompt 
        | llm 
        | StrOutputParser() 
        | postprocess
    )
    
    return chain
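
Note that because the leading dict step routes the raw input into preprocess, this chain is invoked with a plain string rather than a dict:

analysis_chain = create_analysis_chain()
print(analysis_chain.invoke("  langchain makes llm apps modular  "))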

2. Parallel Processing

from langchain_core.runnables import RunnableParallel
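
The sub-chains combined below all receive the same input, so each is assumed here to be a simple prompt-based chain over an {input} variable. A minimal sketch of how they might be defined (the prompt wording is illustrative, and llm is the model initialized earlier):

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

summary_chain = (
    ChatPromptTemplate.from_template("Summarize the following text: {input}")
    | llm | StrOutputParser()
)
sentiment_chain = (
    ChatPromptTemplate.from_template("Classify the sentiment of the following text as positive, negative, or neutral: {input}")
    | llm | StrOutputParser()
)
keyword_chain = (
    ChatPromptTemplate.from_template("Extract the five most important keywords from the following text: {input}")
    | llm | StrOutputParser()
)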

# Run several tasks in parallel over the same input
parallel_chain = RunnableParallel({
    "summary": summary_chain,
    "sentiment": sentiment_chain,
    "keywords": keyword_chain
})

result = parallel_chain.invoke({"input": text})  # `text` is any input string

🛠️ Performance Optimization Tips

1. ์บ์‹ฑ ํ™œ์šฉ

from langchain_core.caches import InMemoryCache
from langchain.globals import set_llm_cache

# Set up an in-memory cache
set_llm_cache(InMemoryCache())

# Identical inputs now return the cached result

2. Streaming Responses

# Receive the response in real time via streaming
for chunk in chain.stream({"topic": "artificial intelligence"}):
    print(chunk, end="", flush=True)

3. ๋น„๋™๊ธฐ ์ฒ˜๋ฆฌ

import asyncio

async def async_chain_execution():
    # Note: the basic chain above expects a "topic" variable
    result = await chain.ainvoke({"topic": "async processing test"})
    return result

# Usage
result = asyncio.run(async_chain_execution())
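
The real payoff comes from running several calls concurrently, for example with asyncio.gather:

# Fan out several chain calls concurrently
async def run_many(topics):
    return await asyncio.gather(*(chain.ainvoke({"topic": t}) for t in topics))

results = asyncio.run(run_many(["AI", "robotics", "biotechnology"]))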

๐Ÿ” ๋””๋ฒ„๊น… ๋ฐ ๋กœ๊น…

from langchain.globals import set_debug

# Enable debug mode for verbose logging
# (equivalent to the older `langchain.debug = True` flag)
set_debug(True)

# Trace the chain's execution step by step
chain = prompt | llm | StrOutputParser()
result = chain.invoke({"topic": "LangChain"})

🌟 LangChain vs. Other Frameworks

| Feature        | LangChain        | LlamaIndex             | Haystack       |
|----------------|------------------|------------------------|----------------|
| Purpose        | General LLM apps | Data-retrieval focused | Enterprise NLP |
| Learning curve | Moderate         | Easy                   | Steep          |
| Extensibility  | High             | Moderate               | High           |
| Community      | Very active      | Active                 | Moderate       |

🚀 A Real Production Example

Using LangChain in the A2A Project

from langchain.memory import ConversationSummaryBufferMemory

# Real project example: an AI agent-to-agent conversation system
class BuyerAgent:
    def __init__(self):
        self.llm = ChatOpenAI(temperature=0.7)
        self.memory = ConversationSummaryBufferMemory(llm=self.llm)
        
        self.prompt = ChatPromptTemplate.from_template("""
        You are the purchasing manager at {company_name}.
        Company profile: {company_summary}
        
        Answer the seller's questions realistically and concretely.
        
        Conversation history: {chat_history}
        Seller message: {seller_message}
        
        Answer:""")
        
        self.chain = self.prompt | self.llm | StrOutputParser()
    
    async def respond(self, company_name, company_summary, seller_message):
        chat_history = self.memory.chat_memory.messages
        
        response = await self.chain.ainvoke({
            "company_name": company_name,
            "company_summary": company_summary,
            "chat_history": chat_history,
            "seller_message": seller_message
        })
        
        # Update memory with both sides of the exchange
        self.memory.chat_memory.add_user_message(seller_message)
        self.memory.chat_memory.add_ai_message(response)
        
        return response

🎯 Best Practices

1. Prompt Engineering

# โŒ ์ข‹์ง€ ์•Š์€ ์˜ˆ
bad_prompt = "์ฝ”๋“œ๋ฅผ ๋ฆฌ๋ทฐํ•ด์ค˜"

# โœ… ์ข‹์€ ์˜ˆ
good_prompt = """
๋‹น์‹ ์€ ์‹œ๋‹ˆ์–ด ๊ฐœ๋ฐœ์ž์ž…๋‹ˆ๋‹ค. ๋‹ค์Œ ๊ธฐ์ค€์œผ๋กœ ์ฝ”๋“œ๋ฅผ ๋ฆฌ๋ทฐํ•ด์ฃผ์„ธ์š”:

1. ์ฝ”๋“œ ํ’ˆ์งˆ (๊ฐ€๋…์„ฑ, ์œ ์ง€๋ณด์ˆ˜์„ฑ)
2. ์„ฑ๋Šฅ ์ตœ์ ํ™” ๊ฐ€๋Šฅ์„ฑ
3. ๋ณด์•ˆ ์ทจ์•ฝ์ 
4. ์ฝ”๋”ฉ ์ปจ๋ฒค์…˜ ์ค€์ˆ˜

์ฝ”๋“œ:
{code}

๊ฐ ํ•ญ๋ชฉ๋ณ„๋กœ ๊ตฌ์ฒด์ ์ธ ํ”ผ๋“œ๋ฐฑ์„ ์ œ๊ณตํ•ด์ฃผ์„ธ์š”.
"""

2. Error Handling

from langchain_core.exceptions import OutputParserException

try:
    result = chain.invoke(input_data)
except OutputParserException as e:
    print(f"Parsing error: {e}")
    # Fall back to alternative logic
except Exception as e:
    print(f"Unexpected error: {e}")
    # Log the error and recover

3. Optimizing Token Usage

from langchain_community.callbacks import get_openai_callback

with get_openai_callback() as cb:
    result = chain.invoke(input_data)
    print(f"Tokens used: {cb.total_tokens}")
    print(f"Cost: ${cb.total_cost:.4f}")

🎉 Wrapping Up

LangChain์€ AI ์• ํ”Œ๋ฆฌ์ผ€์ด์…˜ ๊ฐœ๋ฐœ์˜ ํŒจ๋Ÿฌ๋‹ค์ž„์„ ๋ฐ”๊พธ๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ๋ณต์žกํ•œ AI ์›Œํฌํ”Œ๋กœ์šฐ๋ฅผ ๋ชจ๋“ˆํ™”๋œ ๊ตฌ์„ฑ ์š”์†Œ๋กœ ๋‚˜๋ˆ„์–ด ๊ฐœ๋ฐœํ•  ์ˆ˜ ์žˆ๊ฒŒ ํ•ด์ฃผ๋ฉฐ, RAG, ์—์ด์ „ํŠธ, ๋ฉ”๋ชจ๋ฆฌ ์‹œ์Šคํ…œ ๋“ฑ ๋‹ค์–‘ํ•œ ๊ณ ๊ธ‰ ๊ธฐ๋Šฅ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค.

🔮 Next Steps

  1. Start a hands-on project: begin by building a simple chatbot
  2. Build a RAG system: a QA system over your own documents
  3. Develop an agent: an autonomous AI agent that uses tools
  4. Deploy to production: apply LangChain in a real service

์ด ๊ธ€์ด ๋„์›€์ด ๋˜์…จ๋‹ค๋ฉด ๐Ÿ‘ ์ข‹์•„์š”์™€ ๋Œ“๊ธ€๋กœ ์†Œํ†ตํ•ด์š”!


๋ณธ ๊ธ€์€ Claude Desktop + MCP ์ž๋™ํ™” ์‹œ์Šคํ…œ์œผ๋กœ ์ž‘์„ฑ๋˜์—ˆ์Šต๋‹ˆ๋‹ค ๐Ÿค–

0๊ฐœ์˜ ๋Œ“๊ธ€