This notebook provides a quick overview for getting started with the PyPDF document loader. For detailed documentation of all DocumentLoader features and configurations, head to the API reference.

Overview

Integration details

Class | Package | Local | Serializable | JS support
PyPDFLoader | langchain-community | | |

Loader features

Source | Document Lazy Loading | Native Async Support | Extract Images | Extract Tables
PyPDFLoader | ✅ | | ✅ |

Setup

Credentials

No credentials are required to use PyPDFLoader. To enable automated tracing of your model calls, set your LangSmith API key:
# import getpass
# import os

# os.environ["LANGSMITH_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")
# os.environ["LANGSMITH_TRACING"] = "true"

Installation

Install langchain-community and pypdf:
%pip install -qU langchain-community pypdf
Note: you may need to restart the kernel to use updated packages.

Initialization

Now we can instantiate our loader and load documents:
from langchain_community.document_loaders import PyPDFLoader

file_path = "./example_data/layout-parser-paper.pdf"
loader = PyPDFLoader(file_path)

Load

docs = loader.load()
docs[0]
Document(metadata={'producer': 'pdfTeX-1.40.21', 'creator': 'LaTeX with hyperref', 'creationdate': '2021-06-22T01:27:10+00:00', 'author': '', 'keywords': '', 'moddate': '2021-06-22T01:27:10+00:00', 'ptex.fullbanner': 'This is pdfTeX, Version 3.14159265-2.6-1.40.21 (TeX Live 2020) kpathsea version 6.3.2', 'subject': '', 'title': '', 'trapped': '/False', 'source': './example_data/layout-parser-paper.pdf', 'total_pages': 16, 'page': 0, 'page_label': '1'}, page_content='LayoutParser: A Unified Toolkit for Deep\nLearning Based Document Image Analysis\nZejiang Shen1 (\x00 ), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain\nLee4, Jacob Carlson3, and Weining Li5\n1 Allen Institute for AI\[email protected]\n2 Brown University\nruochen [email protected]\n3 Harvard University\n{melissadell,jacob carlson}@fas.harvard.edu\n4 University of Washington\[email protected]\n5 University of Waterloo\[email protected]\nAbstract. Recent advances in document image analysis (DIA) have been\nprimarily driven by the application of neural networks. Ideally, research\noutcomes could be easily deployed in production and extended for further\ninvestigation. However, various factors like loosely organized codebases\nand sophisticated model configurations complicate the easy reuse of im-\nportant innovations by a wide audience. Though there have been on-going\nefforts to improve reusability and simplify deep learning (DL) model\ndevelopment in disciplines like natural language processing and computer\nvision, none of them are optimized for challenges in the domain of DIA.\nThis represents a major gap in the existing toolkit, as DIA is central to\nacademic research across a wide range of disciplines in the social sciences\nand humanities. This paper introduces LayoutParser, an open-source\nlibrary for streamlining the usage of DL in DIA research and applica-\ntions. The core LayoutParser library comes with a set of simple and\nintuitive interfaces for applying and customizing DL models for layout de-\ntection, character recognition, and many other document processing tasks.\nTo promote extensibility, LayoutParser also incorporates a community\nplatform for sharing both pre-trained models and full document digiti-\nzation pipelines. We demonstrate that LayoutParser is helpful for both\nlightweight and large-scale digitization pipelines in real-word use cases.\nThe library is publicly available at https://layout-parser.github.io.\nKeywords: Document Image Analysis · Deep Learning · Layout Analysis\n· Character Recognition · Open Source library · Toolkit.\n1 Introduction\nDeep Learning(DL)-based approaches are the state-of-the-art for a wide range of\ndocument image analysis (DIA) tasks including document image classification [11,\narXiv:2103.15348v2  [cs.CV]  21 Jun 2021')
import pprint

pprint.pp(docs[0].metadata)
{'producer': 'pdfTeX-1.40.21',
 'creator': 'LaTeX with hyperref',
 'creationdate': '2021-06-22T01:27:10+00:00',
 'author': '',
 'keywords': '',
 'moddate': '2021-06-22T01:27:10+00:00',
 'ptex.fullbanner': 'This is pdfTeX, Version 3.14159265-2.6-1.40.21 (TeX Live '
                    '2020) kpathsea version 6.3.2',
 'subject': '',
 'title': '',
 'trapped': '/False',
 'source': './example_data/layout-parser-paper.pdf',
 'total_pages': 16,
 'page': 0,
 'page_label': '1'}

Lazy Load

pages = []
for doc in loader.lazy_load():
    pages.append(doc)
    if len(pages) >= 10:
        # do some paged operation, e.g.
        # index.upsert(page)

        pages = []
len(pages)
6
print(pages[0].page_content[:100])
pprint.pp(pages[0].metadata)
LayoutParser: A Unified Toolkit for DL-Based DIA 11
focuses on precision, efficiency, and robustness. T
{'producer': 'pdfTeX-1.40.21',
 'creator': 'LaTeX with hyperref',
 'creationdate': '2021-06-22T01:27:10+00:00',
 'author': '',
 'keywords': '',
 'moddate': '2021-06-22T01:27:10+00:00',
 'ptex.fullbanner': 'This is pdfTeX, Version 3.14159265-2.6-1.40.21 (TeX Live '
                    '2020) kpathsea version 6.3.2',
 'subject': '',
 'title': '',
 'trapped': '/False',
 'source': './example_data/layout-parser-paper.pdf',
 'total_pages': 16,
 'page': 10,
 'page_label': '11'}
The metadata attribute contains at least the following keys:
  • source
  • page (if in page mode)
  • total_pages
  • creationdate
  • creator
  • producer
Additional metadata is specific to each parser. This information can be useful, for example, to categorize your PDFs.
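For example, here is a minimal, purely illustrative sketch of using this metadata to categorize loaded documents; the helper name group_by_producer is hypothetical and not part of the library:

from collections import defaultdict


def group_by_producer(documents):
    # Hypothetical helper: bucket Documents by their 'producer' metadata value.
    groups = defaultdict(list)
    for doc in documents:
        groups[doc.metadata.get("producer", "unknown")].append(doc)
    return groups


for producer, items in group_by_producer(docs).items():
    print(producer, len(items))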

Splitting mode & custom pages delimiter

When loading a PDF file, you can split it in two different ways:
  • By page
  • As a single text flow
By default, PyPDFLoader splits the PDF page by page.

Extract the PDF by page. Each page is extracted as a langchain Document object:

loader = PyPDFLoader(
    "./example_data/layout-parser-paper.pdf",
    mode="page",
)
docs = loader.load()
print(len(docs))
pprint.pp(docs[0].metadata)
16
{'producer': 'pdfTeX-1.40.21',
 'creator': 'LaTeX with hyperref',
 'creationdate': '2021-06-22T01:27:10+00:00',
 'author': '',
 'keywords': '',
 'moddate': '2021-06-22T01:27:10+00:00',
 'ptex.fullbanner': 'This is pdfTeX, Version 3.14159265-2.6-1.40.21 (TeX Live '
                    '2020) kpathsea version 6.3.2',
 'subject': '',
 'title': '',
 'trapped': '/False',
 'source': './example_data/layout-parser-paper.pdf',
 'total_pages': 16,
 'page': 0,
 'page_label': '1'}
In this mode, the PDF is split page by page and the page number is included in the resulting Documents' metadata. But in some cases we may want to process the PDF as a single text flow (so that paragraphs are not cut in half). In that case, you can use single mode:

Extract the whole PDF as a single langchain Document object:

loader = PyPDFLoader(
    "./example_data/layout-parser-paper.pdf",
    mode="single",
)
docs = loader.load()
print(len(docs))
pprint.pp(docs[0].metadata)
1
{'producer': 'pdfTeX-1.40.21',
 'creator': 'LaTeX with hyperref',
 'creationdate': '2021-06-22T01:27:10+00:00',
 'author': '',
 'keywords': '',
 'moddate': '2021-06-22T01:27:10+00:00',
 'ptex.fullbanner': 'This is pdfTeX, Version 3.14159265-2.6-1.40.21 (TeX Live '
                    '2020) kpathsea version 6.3.2',
 'subject': '',
 'title': '',
 'trapped': '/False',
 'source': './example_data/layout-parser-paper.pdf',
 'total_pages': 16}
Logically, in this mode, the per-page metadata (such as 'page' and 'page_label') disappears. Here is how to clearly identify where a page ends in the text flow:

Add a custom pages_delimiter to identify where a page ends in single mode:

loader = PyPDFLoader(
    "./example_data/layout-parser-paper.pdf",
    mode="single",
    pages_delimiter="\n-------THIS IS A CUSTOM END OF PAGE-------\n",
)
docs = loader.load()
print(docs[0].page_content[:5780])
LayoutParser: A Unified Toolkit for Deep
Learning Based Document Image Analysis
Zejiang Shen1 (� ), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain
Lee4, Jacob Carlson3, and Weining Li5
1 Allen Institute for AI
[email protected]
2 Brown University
ruochen [email protected]
3 Harvard University
{melissadell,jacob carlson}@fas.harvard.edu
4 University of Washington
[email protected]
5 University of Waterloo
[email protected]
Abstract. Recent advances in document image analysis (DIA) have been
primarily driven by the application of neural networks. Ideally, research
outcomes could be easily deployed in production and extended for further
investigation. However, various factors like loosely organized codebases
and sophisticated model configurations complicate the easy reuse of im-
portant innovations by a wide audience. Though there have been on-going
efforts to improve reusability and simplify deep learning (DL) model
development in disciplines like natural language processing and computer
vision, none of them are optimized for challenges in the domain of DIA.
This represents a major gap in the existing toolkit, as DIA is central to
academic research across a wide range of disciplines in the social sciences
and humanities. This paper introduces LayoutParser, an open-source
library for streamlining the usage of DL in DIA research and applica-
tions. The core LayoutParser library comes with a set of simple and
intuitive interfaces for applying and customizing DL models for layout de-
tection, character recognition, and many other document processing tasks.
To promote extensibility, LayoutParser also incorporates a community
platform for sharing both pre-trained models and full document digiti-
zation pipelines. We demonstrate that LayoutParser is helpful for both
lightweight and large-scale digitization pipelines in real-word use cases.
The library is publicly available at https://layout-parser.github.io.
Keywords: Document Image Analysis · Deep Learning · Layout Analysis
· Character Recognition · Open Source library · Toolkit.
1 Introduction
Deep Learning(DL)-based approaches are the state-of-the-art for a wide range of
document image analysis (DIA) tasks including document image classification [11,
arXiv:2103.15348v2  [cs.CV]  21 Jun 2021
-------THIS IS A CUSTOM END OF PAGE-------
2 Z. Shen et al.
37], layout detection [38, 22], table detection [ 26], and scene text detection [ 4].
A generalized learning-based framework dramatically reduces the need for the
manual specification of complicated rules, which is the status quo with traditional
methods. DL has the potential to transform DIA pipelines and benefit a broad
spectrum of large-scale document digitization projects.
However, there are several practical difficulties for taking advantages of re-
cent advances in DL-based methods: 1) DL models are notoriously convoluted
for reuse and extension. Existing models are developed using distinct frame-
works like TensorFlow [1] or PyTorch [ 24], and the high-level parameters can
be obfuscated by implementation details [ 8]. It can be a time-consuming and
frustrating experience to debug, reproduce, and adapt existing models for DIA,
and many researchers who would benefit the most from using these methods lack
the technical background to implement them from scratch. 2) Document images
contain diverse and disparate patterns across domains, and customized training
is often required to achieve a desirable detection accuracy. Currently there is no
full-fledged infrastructure for easily curating the target document image datasets
and fine-tuning or re-training the models. 3) DIA usually requires a sequence of
models and other processing to obtain the final outputs. Often research teams use
DL models and then perform further document analyses in separate processes,
and these pipelines are not documented in any central location (and often not
documented at all). This makes it difficult for research teams to learn about how
full pipelines are implemented and leads them to invest significant resources in
reinventing the DIA wheel .
LayoutParser provides a unified toolkit to support DL-based document image
analysis and processing. To address the aforementioned challenges,LayoutParser
is built with the following components:
1. An off-the-shelf toolkit for applying DL models for layout detection, character
recognition, and other DIA tasks (Section 3)
2. A rich repository of pre-trained neural network models (Model Zoo) that
underlies the off-the-shelf usage
3. Comprehensive tools for efficient document image data annotation and model
tuning to support different levels of customization
4. A DL model hub and community platform for the easy sharing, distribu-
tion, and discussion of DIA models and pipelines, to promote reusability,
reproducibility, and extensibility (Section 4)
The library implements simple and intuitive Python APIs without sacrificing
generalizability and versatility, and can be easily installed via pip. Its convenient
functions for handling document image data can be seamlessly integrated with
existing DIA pipelines. With detailed documentations and carefully curated
tutorials, we hope this tool will benefit a variety of end-users, and will lead to
advances in applications in both industry and academic research.
LayoutParser is well aligned with recent efforts for improving DL model
reusability in other disciplines like natural language processing [ 8, 34] and com-
puter vision [ 35], but with a focus on unique challenges in DIA. We show
LayoutParser can be applied in sophisticated and large-scale digitization projects
-------THIS IS A CUSTOM END OF PAGE-------
LayoutParser: A Unified Toolkit for DL-Based DIA 3
that require precision, efficiency, and robustness, as well as simple and light
This could simply be \n, or \f to clearly indicate a page change, or {/* PAGE BREAK */} for a seamless injection in a Markdown viewer without a visual effect.
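As a purely illustrative follow-up (assuming the loader configured with the custom pages_delimiter above), the per-page text can be recovered from the single Document by splitting its page_content on that delimiter:

# Recover per-page text from the single Document by splitting on the custom delimiter.
delimiter = "\n-------THIS IS A CUSTOM END OF PAGE-------\n"
page_texts = docs[0].page_content.split(delimiter)
print(len(page_texts))  # should match the number of pages in the PDF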

Extract images from the PDF

You can choose one of three solutions to extract images from PDFs:
  • rapidOCR (a lightweight Optical Character Recognition tool)
  • Tesseract (a high-precision OCR tool)
  • A multimodal language model
You can tune these functions to choose the output format of the extracted images among html, markdown or text. The result is inserted between the last and the second-to-last paragraphs of text on the page.

Extract images from the PDF with rapidOCR

%pip install -qU rapidocr-onnxruntime
Note: you may need to restart the kernel to use updated packages.
from langchain_community.document_loaders.parsers import RapidOCRBlobParser

loader = PyPDFLoader(
    "./example_data/layout-parser-paper.pdf",
    mode="page",
    images_inner_format="markdown-img",
    images_parser=RapidOCRBlobParser(),
)
docs = loader.load()

print(docs[5].page_content)
6 Z. Shen et al.
Fig. 2: The relationship between the three types of layout data structures.
Coordinate supports three kinds of variation; TextBlock consists of the co-
ordinate information and extra features like block text, types, and reading orders;
a Layout object is a list of all possible layout elements, including other Layout
objects. They all support the same set of transformation and operation APIs for
maximum flexibility.
Shown in Table 1, LayoutParser currently hosts 9 pre-trained models trained
on 5 different datasets. Description of the training dataset is provided alongside
with the trained models such that users can quickly identify the most suitable
models for their tasks. Additionally, when such a model is not readily available,
LayoutParser also supports training customized layout models and community
sharing of the models (detailed in Section 3.5).
3.2 Layout Data Structures
A critical feature of LayoutParser is the implementation of a series of data
structures and operations that can be used to efficiently process and manipulate
the layout elements. In document image analysis pipelines, various post-processing
on the layout analysis model outputs is usually required to obtain the final
outputs. Traditionally, this requires exporting DL model outputs and then loading
the results into other pipelines. All model outputs from LayoutParser will be
stored in carefully engineered data types optimized for further processing, which
makes it possible to build an end-to-end document digitization pipeline within
LayoutParser. There are three key components in the data structure, namely
the Coordinate system, the TextBlock, and the Layout. They provide different
levels of abstraction for the layout data, and a set of APIs are supported for
transformations or operations on these classes.



![Coordinate
(x1, y1)
(X1, y1)
(x2,y2)
APIS
x-interval
tart
end
Quadrilateral
operation
Rectangle
y-interval
ena
(x2, y2)
(x4, y4)
(x3, y3)
and
textblock
Coordinate
transformation
+
Block
Block
Reading
Extra features
Text
Type
Order
coordinatel
textblockl
layout
 same
textblock2
layoutl
The
A list of the layout elements](#)
Be careful: RapidOCR is designed to work with Chinese and English, and does not support other languages.

Extract images from the PDF with Tesseract

%pip install -qU pytesseract
Note: you may need to restart the kernel to use updated packages.
from langchain_community.document_loaders.parsers import TesseractBlobParser

loader = PyPDFLoader(
    "./example_data/layout-parser-paper.pdf",
    mode="page",
    images_inner_format="html-img",
    images_parser=TesseractBlobParser(),
)
docs = loader.load()
print(docs[5].page_content)
6 Z. Shen et al.
Fig. 2: The relationship between the three types of layout data structures.
Coordinate supports three kinds of variation; TextBlock consists of the co-
ordinate information and extra features like block text, types, and reading orders;
a Layout object is a list of all possible layout elements, including other Layout
objects. They all support the same set of transformation and operation APIs for
maximum flexibility.
Shown in Table 1, LayoutParser currently hosts 9 pre-trained models trained
on 5 different datasets. Description of the training dataset is provided alongside
with the trained models such that users can quickly identify the most suitable
models for their tasks. Additionally, when such a model is not readily available,
LayoutParser also supports training customized layout models and community
sharing of the models (detailed in Section 3.5).
3.2 Layout Data Structures
A critical feature of LayoutParser is the implementation of a series of data
structures and operations that can be used to efficiently process and manipulate
the layout elements. In document image analysis pipelines, various post-processing
on the layout analysis model outputs is usually required to obtain the final
outputs. Traditionally, this requires exporting DL model outputs and then loading
the results into other pipelines. All model outputs from LayoutParser will be
stored in carefully engineered data types optimized for further processing, which
makes it possible to build an end-to-end document digitization pipeline within
LayoutParser. There are three key components in the data structure, namely
the Coordinate system, the TextBlock, and the Layout. They provide different
levels of abstraction for the layout data, and a set of APIs are supported for
transformations or operations on these classes.



<img alt="Coordinate

textblock

x-interval

JeAsaqul-A

Coordinate
+

Extra features

Rectangle

Quadrilateral

Block
Text

Block
Type

Reading
Order

layout

[ coordinatel textblock1 |
&#x27;

“y textblock2 , layout1 ]

A list of the layout elements

The same transformation and operation APIs src="#" />

Extract images from the PDF with a multimodal model

%pip install -qU langchain-openai
Note: you may need to restart the kernel to use updated packages.
import os

from dotenv import load_dotenv

load_dotenv()
True
from getpass import getpass

if not os.environ.get("OPENAI_API_KEY"):
    os.environ["OPENAI_API_KEY"] = getpass("OpenAI API key =")
from langchain_community.document_loaders.parsers import LLMImageBlobParser
from langchain_openai import ChatOpenAI

loader = PyPDFLoader(
    "./example_data/layout-parser-paper.pdf",
    mode="page",
    images_inner_format="markdown-img",
    images_parser=LLMImageBlobParser(model=ChatOpenAI(model="gpt-4o", max_tokens=1024)),
)
docs = loader.load()
print(docs[5].page_content)
6 Z. Shen et al.
Fig. 2: The relationship between the three types of layout data structures.
Coordinate supports three kinds of variation; TextBlock consists of the co-
ordinate information and extra features like block text, types, and reading orders;
a Layout object is a list of all possible layout elements, including other Layout
objects. They all support the same set of transformation and operation APIs for
maximum flexibility.
Shown in Table 1, LayoutParser currently hosts 9 pre-trained models trained
on 5 different datasets. Description of the training dataset is provided alongside
with the trained models such that users can quickly identify the most suitable
models for their tasks. Additionally, when such a model is not readily available,
LayoutParser also supports training customized layout models and community
sharing of the models (detailed in Section 3.5).
3.2 Layout Data Structures
A critical feature of LayoutParser is the implementation of a series of data
structures and operations that can be used to efficiently process and manipulate
the layout elements. In document image analysis pipelines, various post-processing
on the layout analysis model outputs is usually required to obtain the final
outputs. Traditionally, this requires exporting DL model outputs and then loading
the results into other pipelines. All model outputs from LayoutParser will be
stored in carefully engineered data types optimized for further processing, which
makes it possible to build an end-to-end document digitization pipeline within
LayoutParser. There are three key components in the data structure, namely
the Coordinate system, the TextBlock, and the Layout. They provide different
levels of abstraction for the layout data, and a set of APIs are supported for
transformations or operations on these classes.



![**Image Summary:**
Diagram explaining coordinate systems and layout elements with labels for coordinate intervals, rectangle, quadrilateral, textblock with extra features, and layout list. Includes transformation and operation APIs.

**Extracted Text:**
Coordinate
coordinate
start
start
x-interval
end
y-interval
end
(x1, y1)
Rectangle
(x2, y2)
(x1, y1)
Quadrilateral
(x2, y2)
(x4, y4)
(x3, y3)
The same transformation and operation APIs
textblock
Coordinate
Extra features
Block Text
Block Type
Reading Order
...
layout
coordinate1, textblock1, ...
..., textblock2, layout1
A list of the layout elements  ](#)

Working with Files

Many document loaders involve parsing files. The difference between such loaders usually stems from how the file is parsed, rather than how the file is loaded. For example, you can use open to read the binary content of either a PDF or a markdown file, but you need different parsing logic to convert that binary data into text. As a result, it can be helpful to decouple the parsing logic from the loading logic, which makes it easier to reuse a given parser regardless of how the data was loaded. You can use this strategy to analyze different files with the same parsing parameters.
from langchain_community.document_loaders import FileSystemBlobLoader
from langchain_community.document_loaders.generic import GenericLoader
from langchain_community.document_loaders.parsers import PyPDFParser

loader = GenericLoader(
    blob_loader=FileSystemBlobLoader(
        path="./example_data/",
        glob="*.pdf",
    ),
    blob_parser=PyPDFParser(),
)
docs = loader.load()
print(docs[0].page_content)
pprint.pp(docs[0].metadata)
LayoutParser: A Unified Toolkit for Deep
Learning Based Document Image Analysis
Zejiang Shen1 (� ), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain
Lee4, Jacob Carlson3, and Weining Li5
1 Allen Institute for AI
[email protected]
2 Brown University
ruochen [email protected]
3 Harvard University
{melissadell,jacob carlson}@fas.harvard.edu
4 University of Washington
[email protected]
5 University of Waterloo
[email protected]
Abstract. Recent advances in document image analysis (DIA) have been
primarily driven by the application of neural networks. Ideally, research
outcomes could be easily deployed in production and extended for further
investigation. However, various factors like loosely organized codebases
and sophisticated model configurations complicate the easy reuse of im-
portant innovations by a wide audience. Though there have been on-going
efforts to improve reusability and simplify deep learning (DL) model
development in disciplines like natural language processing and computer
vision, none of them are optimized for challenges in the domain of DIA.
This represents a major gap in the existing toolkit, as DIA is central to
academic research across a wide range of disciplines in the social sciences
and humanities. This paper introduces LayoutParser, an open-source
library for streamlining the usage of DL in DIA research and applica-
tions. The core LayoutParser library comes with a set of simple and
intuitive interfaces for applying and customizing DL models for layout de-
tection, character recognition, and many other document processing tasks.
To promote extensibility, LayoutParser also incorporates a community
platform for sharing both pre-trained models and full document digiti-
zation pipelines. We demonstrate that LayoutParser is helpful for both
lightweight and large-scale digitization pipelines in real-word use cases.
The library is publicly available at https://layout-parser.github.io.
Keywords: Document Image Analysis · Deep Learning · Layout Analysis
· Character Recognition · Open Source library · Toolkit.
1 Introduction
Deep Learning(DL)-based approaches are the state-of-the-art for a wide range of
document image analysis (DIA) tasks including document image classification [11,
arXiv:2103.15348v2  [cs.CV]  21 Jun 2021
{'producer': 'pdfTeX-1.40.21',
 'creator': 'LaTeX with hyperref',
 'creationdate': '2021-06-22T01:27:10+00:00',
 'author': '',
 'keywords': '',
 'moddate': '2021-06-22T01:27:10+00:00',
 'ptex.fullbanner': 'This is pdfTeX, Version 3.14159265-2.6-1.40.21 (TeX Live '
                    '2020) kpathsea version 6.3.2',
 'subject': '',
 'title': '',
 'trapped': '/False',
 'source': 'example_data/layout-parser-paper.pdf',
 'total_pages': 16,
 'page': 0,
 'page_label': '1'}
It is also possible to work with files from cloud storage.
from langchain_community.document_loaders import CloudBlobLoader
from langchain_community.document_loaders.generic import GenericLoader

loader = GenericLoader(
    blob_loader=CloudBlobLoader(
        url="s3://mybucket",  # Supports s3://, az://, gs://, file:// schemes.
        glob="*.pdf",
    ),
    blob_parser=PyPDFParser(),
)
docs = loader.load()
print(docs[0].page_content)
pprint.pp(docs[0].metadata)

API reference

For detailed documentation of all PyPDFLoader features and configurations, head to the API reference: python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.pdf.PyPDFLoader.html