Recently, I noticed that the K-Business Portal displays official document titles in segments of three months.
What if we could collect all document titles through crawling and compile three years' worth into an Excel file?
With everything in one place, finding a document number would be as simple as applying a filter.
To cut to the chase, it was impossible.

1. Everyone has a plan.
My grand plan was this:
1. Access and log in to the business portal using Python's selenium.
2. Use xpath to collect tr and td within table tags and convert them into a data frame.
3. Use datetime and a time delta to step through the dates and collect all documents three months at a time.
4. Use the collected data appropriately in Excel or Google Sheets (a rough sketch of this plan follows just below).
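For reference, here is roughly what that plan would have looked like in code. It is only a sketch: the URL is the portal's real address, but the manual login step, the date filter, and the XPath for the table are all assumptions (I've used dateutil's relativedelta for the three-month steps), and as it turned out, no such table is exposed to the driver at all.

from datetime import date

import pandas as pd
from dateutil.relativedelta import relativedelta  # pip install python-dateutil
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get('https://klef.goe.go.kr/keris_ui/main.do')
input('Log in in the opened browser window, then press Enter...')  # manual login

frames = []
start = date.today() - relativedelta(years=3)
while start < date.today():
    end = start + relativedelta(months=3)
    # Hypothetical step: set the portal's date filter to [start, end) and search.

    # Collect every row of the (assumed) result table into a DataFrame.
    rows = driver.find_elements(By.XPATH, '//table//tr')
    data = [[td.text for td in row.find_elements(By.XPATH, './td')] for row in rows]
    frames.append(pd.DataFrame(data))

    start = end

pd.concat(frames).to_excel('documents.xlsx', index=False)  # needs openpyxl
driver.quit()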
However, when I actually accessed the business portal, I couldn't find any tables, or even any iframes.
After fumbling around for four hours, here is why crawling turned out to be impossible.
2. What about your security?
1. WebDRM

The first barrier was WebDRM.
It blocks you from opening the browser's developer tools on the page.
However, I took advantage of the fact that if the developer tools were already open, they stayed open even after navigating to the site.
That way, I eventually got developer mode working.
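One way to reproduce that trick with selenium is to launch Chrome with DevTools already open for every tab. This is only a sketch under that assumption; whether it gets past a particular WebDRM build isn't guaranteed, and reusing a profile directory is my own addition.

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
# Open DevTools automatically for every tab, so "developer mode" already
# exists before the page's blocking script can run (assumption).
options.add_argument('--auto-open-devtools-for-tabs')
# Reuse a profile directory so login and DevTools state persist (assumption).
options.add_argument('--user-data-dir=./user_data')

driver = webdriver.Chrome(options=options)
driver.get('https://klef.goe.go.kr/keris_ui/main.do')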
2. It appears in the browser, but the elements are absent?
This part was the hardest to understand.
The elements were clearly visible in the developer tools, yet they didn't show up in page_source through ChromeDriver.
It was the same whether I used selenium or puppeteer.
Even more intriguing, the elements couldn't be reached with JavaScript in the console either.
Being able to see them but not select them drove me even crazier.
const puppeteer = require('puppeteer');
const fs = require('fs');

// Script to fetch the HTML of a web page using puppeteer
(async () => {
  let browser;
  try {
    browser = await puppeteer.launch({
      headless: false,
      args: ['--no-sandbox', '--disable-setuid-sandbox'],
      defaultViewport: null,
      userDataDir: './user_data',
    });
    const page = await browser.newPage();
    await page.goto('https://klef.goe.go.kr/keris_ui/main.do', {
      waitUntil: 'networkidle0', // Wait until all resources are loaded
      timeout: 60000, // Wait for up to 60 seconds
    });
    const html = await page.content();
    fs.writeFileSync('schoolDoc.html', html, 'utf8');
    console.log('The HTML file has been saved as schoolDoc.html.');
  } catch (error) {
    console.error('An error occurred:', error);
  } finally {
    if (browser) await browser.close();
  }
})();

Opening the saved file showed the following:
The desired table wasn't there, only the login module.

Thinking it over, I suspected that the table displaying the documents might be rendered by a separately installed program rather than by the browser itself.
This thought made it easier to give up.
3. The business portal uses WebSockets.
It turns out the business portal doesn't serve data through a typical REST API; it exchanges data with the client over WebSockets.
I found that quite interesting.
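You don't even need an extra library to watch this happen: Chrome's performance log exposes the raw WebSocket frames as DevTools Protocol events. Here is a minimal sketch, assuming a manual login and that your chromedriver build exposes the performance log.

import json

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
# Record Chrome DevTools Protocol network events in the performance log.
options.set_capability('goog:loggingPrefs', {'performance': 'ALL'})

driver = webdriver.Chrome(options=options)
driver.get('https://klef.goe.go.kr/keris_ui/main.do')
input('Log in and open the document list, then press Enter...')

# Print the WebSocket frames the page sent and received.
for entry in driver.get_log('performance'):
    event = json.loads(entry['message'])['message']
    if event['method'] in ('Network.webSocketFrameSent',
                           'Network.webSocketFrameReceived'):
        frame = event['params'].get('response', {})
        print(event['method'], frame.get('payloadData', '')[:200])

driver.quit()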

Even copying the cookies over to another library like requests and opening a new connection failed, because of the WebSocket handshake and the way the session is maintained.
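The cookie-copying attempt looked roughly like this. It is only a sketch: driver is assumed to be an already-logged-in selenium session, and the endpoint URL is a made-up placeholder, since no plain HTTP endpoint actually serves the document list.

import requests

# Assumes `driver` is an already-logged-in selenium session.
session = requests.Session()
for cookie in driver.get_cookies():
    session.cookies.set(cookie['name'], cookie['value'],
                        domain=cookie.get('domain'), path=cookie.get('path', '/'))

# Hypothetical endpoint: the real data travels over a WebSocket, not plain
# HTTP, so there is nothing useful to request this way.
resp = session.get('https://klef.goe.go.kr/keris_ui/some/list/endpoint')
print(resp.status_code, len(resp.text))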
But I couldn't give up there.
I decided to inspect requests and responses made through WebSockets using selenium-wire.
# selenium-wire's webdriver records the browser's traffic in driver.requests.
from seleniumwire import webdriver  # pip install selenium-wire

driver = webdriver.Chrome()
driver.get('https://klef.goe.go.kr/keris_ui/main.do')
input('Log in and open the document list, then press Enter...')  # manual login

# Iterate through the captured requests and print any JSON responses
for request in driver.requests:
    if request.response:
        content_type = request.response.headers.get('Content-Type', '')
        if content_type.startswith('application/json'):
            print("== Request URL:", request.url)
            try:
                body = request.response.body.decode('utf-8', errors='ignore')
                print("== Response Body:", body)
            except Exception as e:
                print("== Decoding Error:", e)

driver.quit()

What came back were strings of letters and digits that looked like base64.
Trying to decode them failed, though; whatever came out didn't match any specification I could recognize.
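The decode attempts were nothing fancy; roughly the following, shown here with a made-up stand-in payload (the real strings were much longer and never produced anything readable).

import base64
import binascii

payload = 'aGVsbG8td29ybGQ'  # stand-in for one of the captured strings

try:
    # Pad to a multiple of four and try a strict decode.
    decoded = base64.b64decode(payload + '=' * (-len(payload) % 4), validate=True)
    print(decoded)
except binascii.Error as e:
    # The real payloads ended up here, or decoded into bytes that matched
    # no format I could recognize.
    print('decode failed:', e)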
That's when I decided to stop digging any further.
3. Reflections
I used to be annoyed by how slow public-institution sites are, but after digging into this one, I have to admit its security seems solid.
Getting even a single piece of data out remotely wasn't easy, but I learned a lot along the way.
Still, I think I'll give it another try sometime.
