Repository: lindsey98/Phishpedia
Branch: main
Commit: d030779aad0a
Files: 34
Total size: 130.9 KB
Directory structure:
gitextract_38x9q4gh/
├── .github/
│ └── workflows/
│ ├── codeql.yml
│ ├── lint.yml
│ └── pytest.yml
├── .gitignore
├── LICENSE
├── Plugin_for_Chrome/
│ ├── README.md
│ ├── client/
│ │ ├── background.js
│ │ ├── manifest.json
│ │ └── popup/
│ │ ├── popup.css
│ │ ├── popup.html
│ │ └── popup.js
│ └── server/
│ └── app.py
├── README.md
├── WEBtool/
│ ├── app.py
│ ├── phishpedia_web.py
│ ├── readme.md
│ ├── static/
│ │ ├── css/
│ │ │ ├── sidebar.css
│ │ │ └── style.css
│ │ └── js/
│ │ ├── main.js
│ │ └── sidebar.js
│ ├── templates/
│ │ └── index.html
│ └── utils_web.py
├── configs.py
├── configs.yaml
├── datasets/
│ └── test_sites/
│ └── accounts.g.cdcde.com/
│ ├── html.txt
│ └── info.txt
├── logo_matching.py
├── logo_recog.py
├── models.py
├── phishpedia.py
├── pixi.toml
├── setup.bat
├── setup.sh
└── utils.py
================================================
FILE CONTENTS
================================================
================================================
FILE: .github/workflows/codeql.yml
================================================
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: "CodeQL Advanced"
on:
push:
branches: [ "main" ]
pull_request:
branches: [ "main" ]
schedule:
- cron: '22 9 * * 2'
jobs:
analyze:
name: Analyze (${{ matrix.language }})
# Runner size impacts CodeQL analysis time. To learn more, please see:
# - https://gh.io/recommended-hardware-resources-for-running-codeql
# - https://gh.io/supported-runners-and-hardware-resources
# - https://gh.io/using-larger-runners (GitHub.com only)
# Consider using larger runners or machines with greater resources for possible analysis time improvements.
runs-on: ${{ (matrix.language == 'swift' && 'macos-latest') || 'ubuntu-latest' }}
permissions:
# required for all workflows
security-events: write
# required to fetch internal or private CodeQL packs
packages: read
# only required for workflows in private repositories
actions: read
contents: read
strategy:
fail-fast: false
matrix:
include:
- language: python
build-mode: none
# CodeQL supports the following values for 'language': 'c-cpp', 'csharp', 'go', 'java-kotlin', 'javascript-typescript', 'python', 'ruby', 'swift'
# Use `c-cpp` to analyze code written in C, C++ or both
# Use 'java-kotlin' to analyze code written in Java, Kotlin or both
# Use 'javascript-typescript' to analyze code written in JavaScript, TypeScript or both
# To learn more about changing the languages that are analyzed or customizing the build mode for your analysis,
# see https://docs.github.com/en/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/customizing-your-advanced-setup-for-code-scanning.
# If you are analyzing a compiled language, you can modify the 'build-mode' for that language to customize how
# your codebase is analyzed, see https://docs.github.com/en/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/codeql-code-scanning-for-compiled-languages
steps:
- name: Checkout repository
uses: actions/checkout@v4
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v3
with:
languages: ${{ matrix.language }}
build-mode: ${{ matrix.build-mode }}
# If you wish to specify custom queries, you can do so here or in a config file.
# By default, queries listed here will override any specified in a config file.
# Prefix the list here with "+" to use these queries and those in the config file.
# For more details on CodeQL's query packs, refer to: https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/configuring-code-scanning#using-queries-in-ql-packs
# queries: security-extended,security-and-quality
# If the analyze step fails for one of the languages you are analyzing with
# "We were unable to automatically build your code", modify the matrix above
# to set the build mode to "manual" for that language. Then modify this step
# to build your code.
# ℹ️ Command-line programs to run using the OS shell.
# 📚 See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsrun
- if: matrix.build-mode == 'manual'
shell: bash
run: |
echo 'If you are using a "manual" build mode for one or more of the' \
'languages you are analyzing, replace this with the commands to build' \
'your code, for example:'
echo ' make bootstrap'
echo ' make release'
exit 1
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v3
with:
category: "/language:${{matrix.language}}"
================================================
FILE: .github/workflows/lint.yml
================================================
name: flake8 Lint
on: [push, pull_request]
jobs:
flake8-lint:
runs-on: ubuntu-latest
name: Lint
steps:
- name: Check out source repository
uses: actions/checkout@v3
- name: Set up Python environment
uses: actions/setup-python@v4
with:
python-version: "3.11"
- name: flake8 Lint
uses: py-actions/flake8@v2
with:
ignore: "E266,W293,W504,E501"
================================================
FILE: .github/workflows/pytest.yml
================================================
name: Pytest CI
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
jobs:
build:
runs-on: ubuntu-latest
steps:
# Step 1: check out the code
- name: Checkout code
uses: actions/checkout@v3
# Step 2: set up Miniconda
- name: Set up Miniconda
uses: conda-incubator/setup-miniconda@v2
with:
auto-update-conda: true  # automatically update Conda
python-version: '3.9'  # specify the Python version
activate-environment: phishpedia
# Cache Conda and pip packages
- name: Cache Conda packages and pip cache
uses: actions/cache@v3
with:
path: |
~/.conda/pkgs  # cache Conda packages
~/.cache/pip  # cache pip packages
phishpedia/lib/python3.9/site-packages  # optional: cache the environment's site-packages
key: ${{ runner.os }}-conda-${{ hashFiles('**/environment.yml', '**/requirements.txt') }}
restore-keys: |
${{ runner.os }}-conda-
# Step 3: upgrade pip
- name: Upgrade pip
run: |
python -m pip install --upgrade pip
# Step 4: clone the Phishpedia repo and run setup.sh
- name: Clone Phishpedia repo and run setup.sh
run: |
git clone https://github.com/lindsey98/Phishpedia.git
cd Phishpedia
chmod +x ./setup.sh
./setup.sh
# Step 5: install project dependencies and pytest
- name: Install dependencies and pytest
run: |
conda run -n phishpedia pip install pytest
conda run -n phishpedia pip install validators
# Step 6: run the Pytest tests
- name: Run Pytest
run: |
conda run -n phishpedia pytest tests/test_logo_matching.py
conda run -n phishpedia pytest tests/test_logo_recog.py
conda run -n phishpedia pytest tests/test_phishpedia.py
================================================
FILE: .gitignore
================================================
*.zip
*.pkl
*.pth*
venv/
__pycache__/
================================================
FILE: LICENSE
================================================
Creative Commons Legal Code
CC0 1.0 Universal
CREATIVE COMMONS CORPORATION IS NOT A LAW FIRM AND DOES NOT PROVIDE
LEGAL SERVICES. DISTRIBUTION OF THIS DOCUMENT DOES NOT CREATE AN
ATTORNEY-CLIENT RELATIONSHIP. CREATIVE COMMONS PROVIDES THIS
INFORMATION ON AN "AS-IS" BASIS. CREATIVE COMMONS MAKES NO WARRANTIES
REGARDING THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS
PROVIDED HEREUNDER, AND DISCLAIMS LIABILITY FOR DAMAGES RESULTING FROM
THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS PROVIDED
HEREUNDER.
Statement of Purpose
The laws of most jurisdictions throughout the world automatically confer
exclusive Copyright and Related Rights (defined below) upon the creator
and subsequent owner(s) (each and all, an "owner") of an original work of
authorship and/or a database (each, a "Work").
Certain owners wish to permanently relinquish those rights to a Work for
the purpose of contributing to a commons of creative, cultural and
scientific works ("Commons") that the public can reliably and without fear
of later claims of infringement build upon, modify, incorporate in other
works, reuse and redistribute as freely as possible in any form whatsoever
and for any purposes, including without limitation commercial purposes.
These owners may contribute to the Commons to promote the ideal of a free
culture and the further production of creative, cultural and scientific
works, or to gain reputation or greater distribution for their Work in
part through the use and efforts of others.
For these and/or other purposes and motivations, and without any
expectation of additional consideration or compensation, the person
associating CC0 with a Work (the "Affirmer"), to the extent that he or she
is an owner of Copyright and Related Rights in the Work, voluntarily
elects to apply CC0 to the Work and publicly distribute the Work under its
terms, with knowledge of his or her Copyright and Related Rights in the
Work and the meaning and intended legal effect of CC0 on those rights.
1. Copyright and Related Rights. A Work made available under CC0 may be
protected by copyright and related or neighboring rights ("Copyright and
Related Rights"). Copyright and Related Rights include, but are not
limited to, the following:
i. the right to reproduce, adapt, distribute, perform, display,
communicate, and translate a Work;
ii. moral rights retained by the original author(s) and/or performer(s);
iii. publicity and privacy rights pertaining to a person's image or
likeness depicted in a Work;
iv. rights protecting against unfair competition in regards to a Work,
subject to the limitations in paragraph 4(a), below;
v. rights protecting the extraction, dissemination, use and reuse of data
in a Work;
vi. database rights (such as those arising under Directive 96/9/EC of the
European Parliament and of the Council of 11 March 1996 on the legal
protection of databases, and under any national implementation
thereof, including any amended or successor version of such
directive); and
vii. other similar, equivalent or corresponding rights throughout the
world based on applicable law or treaty, and any national
implementations thereof.
2. Waiver. To the greatest extent permitted by, but not in contravention
of, applicable law, Affirmer hereby overtly, fully, permanently,
irrevocably and unconditionally waives, abandons, and surrenders all of
Affirmer's Copyright and Related Rights and associated claims and causes
of action, whether now known or unknown (including existing as well as
future claims and causes of action), in the Work (i) in all territories
worldwide, (ii) for the maximum duration provided by applicable law or
treaty (including future time extensions), (iii) in any current or future
medium and for any number of copies, and (iv) for any purpose whatsoever,
including without limitation commercial, advertising or promotional
purposes (the "Waiver"). Affirmer makes the Waiver for the benefit of each
member of the public at large and to the detriment of Affirmer's heirs and
successors, fully intending that such Waiver shall not be subject to
revocation, rescission, cancellation, termination, or any other legal or
equitable action to disrupt the quiet enjoyment of the Work by the public
as contemplated by Affirmer's express Statement of Purpose.
3. Public License Fallback. Should any part of the Waiver for any reason
be judged legally invalid or ineffective under applicable law, then the
Waiver shall be preserved to the maximum extent permitted taking into
account Affirmer's express Statement of Purpose. In addition, to the
extent the Waiver is so judged Affirmer hereby grants to each affected
person a royalty-free, non transferable, non sublicensable, non exclusive,
irrevocable and unconditional license to exercise Affirmer's Copyright and
Related Rights in the Work (i) in all territories worldwide, (ii) for the
maximum duration provided by applicable law or treaty (including future
time extensions), (iii) in any current or future medium and for any number
of copies, and (iv) for any purpose whatsoever, including without
limitation commercial, advertising or promotional purposes (the
"License"). The License shall be deemed effective as of the date CC0 was
applied by Affirmer to the Work. Should any part of the License for any
reason be judged legally invalid or ineffective under applicable law, such
partial invalidity or ineffectiveness shall not invalidate the remainder
of the License, and in such case Affirmer hereby affirms that he or she
will not (i) exercise any of his or her remaining Copyright and Related
Rights in the Work or (ii) assert any associated claims and causes of
action with respect to the Work, in either case contrary to Affirmer's
express Statement of Purpose.
4. Limitations and Disclaimers.
a. No trademark or patent rights held by Affirmer are waived, abandoned,
surrendered, licensed or otherwise affected by this document.
b. Affirmer offers the Work as-is and makes no representations or
warranties of any kind concerning the Work, express, implied,
statutory or otherwise, including without limitation warranties of
title, merchantability, fitness for a particular purpose, non
infringement, or the absence of latent or other defects, accuracy, or
the present or absence of errors, whether or not discoverable, all to
the greatest extent permissible under applicable law.
c. Affirmer disclaims responsibility for clearing rights of other persons
that may apply to the Work or any use thereof, including without
limitation any person's Copyright and Related Rights in the Work.
Further, Affirmer disclaims responsibility for obtaining any necessary
consents, permissions or other rights required for any use of the
Work.
d. Affirmer understands and acknowledges that Creative Commons is not a
party to this document and has no duty or obligation with respect to
this CC0 or use of the Work.
================================================
FILE: Plugin_for_Chrome/README.md
================================================
# Plugin_for_Chrome
## Project Overview
`Plugin_for_Chrome` is a Chrome extension project designed to detect phishing websites.
The extension automatically retrieves the current webpage's URL and a screenshot when the user presses a predefined hotkey or clicks the extension button, then sends this information to the server for phishing detection. The server utilizes the Flask framework, loads the Phishpedia model for identification, and returns the detection results.
## Directory Structure
```
Plugin_for_Chrome/
├── client/
│ ├── background.js # Handles the extension's background logic, including hotkeys and button click events.
│ ├── manifest.json # Configuration file for the Chrome extension.
│ └── popup/
│ ├── popup.html # HTML file for the extension's popup page.
│ ├── popup.js # JavaScript file for the extension's popup page.
│ └── popup.css # CSS file for the extension's popup page.
└── server/
└── app.py # Main program for the Flask server, handling client requests and invoking the Phishpedia model for detection.
```
## Installation and Usage
### Frontend
1. Open the Chrome browser and navigate to `chrome://extensions/`.
2. Enable Developer Mode.
3. Click on "Load unpacked" and select the `Plugin_for_Chrome` directory.
### Backend
1. Run the Flask server:
```bash
pixi run python -m Plugin_for_Chrome.server.app
```
## Using the Extension
In the Chrome browser, press the hotkey `Ctrl+Shift+H` or click the extension button.
The extension will automatically capture the current webpage's URL and a screenshot, then send them to the server for analysis.
The server will return the detection results, and the extension will display whether the webpage is a phishing site along with the corresponding legitimate website.
## Notes
Ensure that the server is running locally and listening on the default port 5000.
The extension and the server must operate within the same network environment.
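For quick debugging without the extension, you can exercise the `/analyze` endpoint's payload format directly. The sketch below only builds the JSON body the server expects (a URL plus a base64 data-URL screenshot, which is what `chrome.tabs.captureVisibleTab` produces) and mirrors the server's decoding step; the PNG bytes and URL are placeholders, and no request is actually sent.

```python
import base64

# Stand-in for real screenshot bytes captured by the extension.
png_bytes = b"\x89PNG\r\n\x1a\n..."
# chrome.tabs.captureVisibleTab returns a data URL of this shape.
data_url = "data:image/png;base64," + base64.b64encode(png_bytes).decode("ascii")

# The JSON body POSTed to /analyze by background.js.
payload = {
    "url": "https://example.com/login",  # placeholder URL
    "screenshot": data_url,
}

# Mirror of the server-side decoding in server/app.py:
# it strips the "data:image/png;base64," prefix by splitting on the comma.
decoded = base64.b64decode(payload["screenshot"].split(",")[1])
assert decoded == png_bytes
```

A successful response is JSON with `isPhishing`, `brand`, `legitUrl`, and `confidence` fields, as constructed in `server/app.py`.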
## Contributing
Feel free to submit issues and contribute code!
================================================
FILE: Plugin_for_Chrome/client/background.js
================================================
// Capture the screenshot and URL of a tab
async function captureTabInfo(tab) {
try {
// Capture the visible tab as a PNG
const screenshot = await chrome.tabs.captureVisibleTab(null, {
format: 'png'
});
// Get the current URL
const url = tab.url;
// Send to the server for analysis
const response = await fetch('http://localhost:5000/analyze', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
url: url,
screenshot: screenshot
})
});
const result = await response.json();
// Forward the result to the popup
chrome.runtime.sendMessage({
type: 'analysisResult',
data: result
});
} catch (error) {
console.error('Error capturing tab info:', error);
chrome.runtime.sendMessage({
type: 'error',
data: error.message
});
}
}
// Listen for the keyboard shortcut command
chrome.commands.onCommand.addListener(async (command) => {
if (command === '_execute_action') {
const [tab] = await chrome.tabs.query({ active: true, currentWindow: true });
if (tab) {
await captureTabInfo(tab);
}
}
});
// Listen for messages from the popup
chrome.runtime.onMessage.addListener((request, sender, sendResponse) => {
if (request.type === 'analyze') {
chrome.tabs.query({ active: true, currentWindow: true }, async (tabs) => {
if (tabs[0]) {
await captureTabInfo(tabs[0]);
}
});
}
return true;
});
================================================
FILE: Plugin_for_Chrome/client/manifest.json
================================================
{
"manifest_version": 3,
"name": "Phishing Detector",
"version": "1.0",
"description": "Detect phishing websites using screenshot and URL analysis",
"permissions": [
"activeTab",
"scripting",
"storage",
"tabs"
],
"host_permissions": [
"http://localhost:5000/*"
],
"action": {
"default_popup": "popup/popup.html"
},
"background": {
"service_worker": "background.js"
},
"commands": {
"_execute_action": {
"suggested_key": {
"default": "Ctrl+Shift+H",
"mac": "Command+Shift+H"
},
"description": "Analyze current page for phishing"
}
}
}
================================================
FILE: Plugin_for_Chrome/client/popup/popup.css
================================================
.container {
width: 300px;
padding: 16px;
}
h1 {
font-size: 18px;
margin-bottom: 16px;
}
button {
width: 100%;
padding: 8px;
background-color: #4CAF50;
color: white;
border: none;
border-radius: 4px;
cursor: pointer;
margin-bottom: 16px;
}
button:hover {
background-color: #45a049;
}
.hidden {
display: none;
}
#loading {
text-align: center;
margin: 16px 0;
}
#result {
margin-top: 16px;
}
.safe {
color: #4CAF50;
}
.dangerous {
color: #f44336;
}
.error-message {
color: #f44336;
}
================================================
FILE: Plugin_for_Chrome/client/popup/popup.html
================================================
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<link rel="stylesheet" href="popup.css">
</head>
<body>
<div class="container">
<h1>Phishing Detector</h1>
<button id="analyzeBtn">Analyze Current Page</button>
<div id="loading" class="hidden">
Analyzing...
</div>
<div id="result" class="hidden">
<h2>Analysis Result:</h2>
<div id="status"></div>
<div id="legitUrl" class="hidden">
<h3>Corresponding legitimate website:</h3>
<a id="legitUrlLink" href="#" target="_blank"></a>
</div>
</div>
<div id="error" class="hidden">
<p class="error-message"></p>
</div>
</div>
<script src="popup.js"></script>
</body>
</html>
================================================
FILE: Plugin_for_Chrome/client/popup/popup.js
================================================
document.addEventListener('DOMContentLoaded', () => {
const analyzeBtn = document.getElementById('analyzeBtn');
const loading = document.getElementById('loading');
const result = document.getElementById('result');
const status = document.getElementById('status');
const legitUrl = document.getElementById('legitUrl');
const legitUrlLink = document.getElementById('legitUrlLink');
const error = document.getElementById('error');
// Handle clicks on the analyze button
analyzeBtn.addEventListener('click', () => {
// Show the loading state
loading.classList.remove('hidden');
result.classList.add('hidden');
error.classList.add('hidden');
// Send a message to the background script
chrome.runtime.sendMessage({
type: 'analyze'
});
});
// Listen for messages from the background script
chrome.runtime.onMessage.addListener((message) => {
loading.classList.add('hidden');
if (message.type === 'analysisResult') {
result.classList.remove('hidden');
if (message.data.isPhishing) {
status.innerHTML = '<span class="dangerous">⚠️ Warning: this may be a phishing website!</span>';
if (message.data.legitUrl) {
legitUrl.classList.remove('hidden');
legitUrlLink.href = message.data.legitUrl;
legitUrlLink.textContent = message.data.brand;
}
} else {
status.innerHTML = '<span class="safe">✓ This website looks safe</span>';
legitUrl.classList.add('hidden');
}
} else if (message.type === 'error') {
error.classList.remove('hidden');
error.querySelector('.error-message').textContent = message.data;
}
});
});
================================================
FILE: Plugin_for_Chrome/server/app.py
================================================
from flask import Flask, request, jsonify
from flask_cors import CORS
import base64
from io import BytesIO
from PIL import Image
from datetime import datetime
import os
from phishpedia import PhishpediaWrapper, result_file_write
app = Flask(__name__)
CORS(app)
# Initialize the model when the app is created
with app.app_context():
current_dir = os.path.dirname(os.path.realpath(__file__))
log_dir = os.path.join(current_dir, 'plugin_logs')
os.makedirs(log_dir, exist_ok=True)
phishpedia_cls = PhishpediaWrapper()
@app.route('/analyze', methods=['POST'])
def analyze():
try:
print('Request received')
data = request.get_json()
url = data.get('url')
screenshot_data = data.get('screenshot')
# Decode the base64 image data
image_data = base64.b64decode(screenshot_data.split(',')[1])
image = Image.open(BytesIO(image_data))
screenshot_path = 'temp_screenshot.png'
image.save(screenshot_path, format='PNG')
# Run the Phishpedia model on the screenshot
phish_category, pred_target, matched_domain, \
plotvis, siamese_conf, pred_boxes, \
logo_recog_time, logo_match_time = phishpedia_cls.test_orig_phishpedia(url, screenshot_path, None)
# Build the response payload
result = {
"isPhishing": bool(phish_category),
"brand": pred_target if pred_target else "unknown",
"legitUrl": f"https://{matched_domain[0]}" if matched_domain else "unknown",
"confidence": float(siamese_conf) if siamese_conf is not None else 0.0
}
# Write the result log
today = datetime.now().strftime('%Y%m%d')
log_file_path = os.path.join(log_dir, f'{today}_results.txt')
try:
with open(log_file_path, "a+", encoding='ISO-8859-1') as f:
result_file_write(f, current_dir, url, phish_category, pred_target,
matched_domain if matched_domain else ["unknown"],
siamese_conf if siamese_conf is not None else 0.0,
logo_recog_time, logo_match_time)
except UnicodeError:
with open(log_file_path, "a+", encoding='utf-8') as f:
result_file_write(f, current_dir, url, phish_category, pred_target,
matched_domain if matched_domain else ["unknown"],
siamese_conf if siamese_conf is not None else 0.0,
logo_recog_time, logo_match_time)
if os.path.exists(screenshot_path):
os.remove(screenshot_path)
return jsonify(result)
except Exception as e:
print(f"Error in analyze: {str(e)}")
log_error_path = os.path.join(log_dir, 'log_error.txt')
with open(log_error_path, "a+", encoding='utf-8') as f:
f.write(f'{datetime.now().strftime("%Y-%m-%d %H:%M:%S")} - {str(e)}\n')
return jsonify("ERROR"), 500
if __name__ == '__main__':
app.run(host='0.0.0.0', port=5000, debug=False)
================================================
FILE: README.md
================================================
# Phishpedia: A Hybrid Deep Learning Based Approach to Visually Identify Phishing Webpages
<div align="center">


</div>
<p align="center">
<a href="https://www.usenix.org/conference/usenixsecurity21/presentation/lin">Paper</a> •
<a href="https://sites.google.com/view/phishpedia-site/">Website</a> •
<a href="https://www.youtube.com/watch?v=ZQOH1RW5DmY">Video</a> •
<a href="https://drive.google.com/file/d/12ypEMPRQ43zGRqHGut0Esq2z5en0DH4g/view?usp=drive_link">Dataset</a> •
<a href="#citation">Citation</a>
</p>
- This is the official implementation of "Phishpedia: A Hybrid Deep Learning Based Approach to Visually Identify Phishing Webpages" USENIX'21 [link to paper](https://www.usenix.org/conference/usenixsecurity21/presentation/lin), [link to our website](https://sites.google.com/view/phishpedia-site/), [link to our dataset](https://drive.google.com/file/d/12ypEMPRQ43zGRqHGut0Esq2z5en0DH4g/view?usp=drive_link).
- Existing reference-based phishing detectors:
- :x: Lack **interpretability**: they only give a binary decision (legit or phish)
- :x: Are **not robust against distribution shift**, because the classifier is biased towards the phishing training set
- :x: Lack a **large-scale phishing benchmark** dataset
- The contributions of our paper:
- :white_check_mark: We propose Phishpedia, a phishing identification system with high identification accuracy and low runtime overhead, outperforming the relevant state-of-the-art identification approaches.
- :white_check_mark: We are the first to propose a **consistency-based method** for phishing detection, in place of the traditional classification-based method. We investigate the consistency between the webpage domain and its brand intention. The detected brand intention provides a **visual explanation** for the phishing decision.
- :white_check_mark: Phishpedia is **NOT trained on any phishing dataset**, addressing the potential test-time distribution shift problem.
- :white_check_mark: We release a **30k phishing benchmark dataset**, in which each website is annotated with its URL, HTML, screenshot, and target brand: https://drive.google.com/file/d/12ypEMPRQ43zGRqHGut0Esq2z5en0DH4g/view?usp=drive_link.
- :white_check_mark: We set up a **phishing monitoring system** that investigates emerging domains fed from CertStream; it has discovered 1,704 real phishing websites, 1,133 of which are zero-days not reported by any industrial antivirus engine on VirusTotal.
## Framework
<img src="./datasets/overview.png" style="width:2000px;height:350px"/>
`Input`: A URL and its screenshot. `Output`: Phish/Benign, and the phishing target.
- Step 1: The screenshot enters the <b>Deep Object Detection Model</b>, which predicts logo and input boxes (input boxes are not used for the later prediction, only for explanation)
- Step 2: The predicted logo enters the <b>Deep Siamese Model</b>
- If the Siamese model reports no target, `Return Benign, None`
- Else the Siamese model reports a target, `Return Phish, Phishing target`
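The two-step decision above can be sketched as follows. `detect_logos` and `match_target` are placeholder stubs standing in for the detector in `logo_recog.py` and the Siamese matcher in `logo_matching.py`; they are not the repo's actual API.

```python
def detect_logos(screenshot_path):
    # Placeholder for the deep object detection model (Step 1):
    # returns predicted logo bounding boxes for the screenshot.
    return [(10, 10, 120, 60)]

def match_target(logo_boxes, url):
    # Placeholder for the deep Siamese model (Step 2): returns a brand
    # name when a logo matches a protected brand whose legitimate
    # domain is inconsistent with the given URL, else None.
    return "SomeBrand" if logo_boxes else None

def phishpedia_decide(url, screenshot_path):
    logo_boxes = detect_logos(screenshot_path)  # Step 1
    target = match_target(logo_boxes, url)      # Step 2
    if target is None:
        return "Benign", None
    return "Phish", target
```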
## Setup
Prerequisite: [Pixi installed](https://pixi.sh/latest/)
For Linux/Mac,
```bash
export KMP_DUPLICATE_LIB_OK=TRUE
git clone https://github.com/lindsey98/Phishpedia.git
cd Phishpedia
pixi install
chmod +x setup.sh
./setup.sh
```
For Windows, in PowerShell,
```powershell
git clone https://github.com/lindsey98/Phishpedia.git
cd Phishpedia
pixi install
setup.bat
```
## Running Phishpedia from Command Line
```bash
pixi run python phishpedia.py --folder <folder you want to test e.g. ./datasets/test_sites>
```
Each folder under the testing directory should follow this structure:
```
test_site_1
|__ info.txt (Write the URL)
|__ shot.png (Save the screenshot)
test_site_2
|__ info.txt (Write the URL)
|__ shot.png (Save the screenshot)
......
```
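A test folder can be prepared programmatically. The sketch below creates the layout above; the site name and URL are examples, and `shot.png` must be supplied separately as the page screenshot.

```python
import os

# Example site folder under the testing directory (name is illustrative).
site_dir = os.path.join("datasets", "test_sites", "example_site")
os.makedirs(site_dir, exist_ok=True)

# info.txt holds the page URL on a single line.
with open(os.path.join(site_dir, "info.txt"), "w") as f:
    f.write("https://example.com/login")

# The screenshot must be placed alongside info.txt:
# datasets/test_sites/example_site/shot.png
```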
## Running Phishpedia as a GUI tool (web-browser-based)
See [WEBtool/](WEBtool/)
## Install Phishpedia as a Chrome plugin
See [Plugin_for_Chrome/](Plugin_for_Chrome/)
## Project structure
```
- models/
|___ rcnn_bet365.pth
|___ faster_rcnn.yaml
|___ resnetv2_rgb_new.pth.tar
|___ expand_targetlist/
|___ Adobe/
|___ Amazon/
|___ ......
|___ domain_map.pkl
- logo_recog.py: Deep Object Detection Model
- logo_matching.py: Deep Siamese Model
- configs.yaml: Configuration file
- phishpedia.py: Main script
```
## Miscellaneous
- In our paper, we also implement several phishing detection and identification baselines, see [here](https://github.com/lindsey98/PhishingBaseline)
- The logo targetlist described in our paper includes 181 brands; we have expanded it to 277 brands in this code repository
- For the phish discovery experiment, we obtain a feed from [Certstream phish_catcher](https://github.com/x0rz/phishing_catcher); we lower the score threshold to 40 to process more suspicious websites (see their repo for details)
- We use Scrapy for website crawling
## Citation
If you find our work useful in your research, please consider citing our paper by:
```bibtex
@inproceedings{lin2021phishpedia,
title={Phishpedia: A Hybrid Deep Learning Based Approach to Visually Identify Phishing Webpages},
author={Lin, Yun and Liu, Ruofan and Divakaran, Dinil Mon and Ng, Jun Yang and Chan, Qing Zhou and Lu, Yiwen and Si, Yuxuan and Zhang, Fan and Dong, Jin Song},
booktitle={30th $\{$USENIX$\}$ Security Symposium ($\{$USENIX$\}$ Security 21)},
year={2021}
}
```
## Contacts
If you have any issues running our code, you can raise an issue or send an email to liu.ruofan16@u.nus.edu, lin_yun@sjtu.edu.cn, and dcsdjs@nus.edu.sg
================================================
FILE: WEBtool/app.py
================================================
from flask import Flask, request, jsonify
from flask_cors import CORS
import base64
from io import BytesIO
from PIL import Image
from datetime import datetime
import os
from phishpedia import PhishpediaWrapper, result_file_write
app = Flask(__name__)
CORS(app)
# Initialize the model when the app is created
with app.app_context():
current_dir = os.path.dirname(os.path.realpath(__file__))
log_dir = os.path.join(current_dir, 'plugin_logs')
os.makedirs(log_dir, exist_ok=True)
phishpedia_cls = PhishpediaWrapper()
@app.route('/analyze', methods=['POST'])
def analyze():
try:
print('Request received')
data = request.get_json()
url = data.get('url')
screenshot_data = data.get('screenshot')
# Decode the base64 image data
image_data = base64.b64decode(screenshot_data.split(',')[1])
image = Image.open(BytesIO(image_data))
screenshot_path = 'temp_screenshot.png'
image.save(screenshot_path, format='PNG')
# Run the Phishpedia model on the screenshot
phish_category, pred_target, matched_domain, \
plotvis, siamese_conf, pred_boxes, \
logo_recog_time, logo_match_time = phishpedia_cls.test_orig_phishpedia(url, screenshot_path, None)
# Build the response payload
result = {
"isPhishing": bool(phish_category),
"brand": pred_target if pred_target else "unknown",
"legitUrl": f"https://{matched_domain[0]}" if matched_domain else "unknown",
"confidence": float(siamese_conf) if siamese_conf is not None else 0.0
}
# Write the result log
today = datetime.now().strftime('%Y%m%d')
log_file_path = os.path.join(log_dir, f'{today}_results.txt')
try:
with open(log_file_path, "a+", encoding='ISO-8859-1') as f:
result_file_write(f, current_dir, url, phish_category, pred_target,
matched_domain if matched_domain else ["unknown"],
siamese_conf if siamese_conf is not None else 0.0,
logo_recog_time, logo_match_time)
except UnicodeError:
with open(log_file_path, "a+", encoding='utf-8') as f:
result_file_write(f, current_dir, url, phish_category, pred_target,
matched_domain if matched_domain else ["unknown"],
siamese_conf if siamese_conf is not None else 0.0,
logo_recog_time, logo_match_time)
if os.path.exists(screenshot_path):
os.remove(screenshot_path)
return jsonify(result)
except Exception as e:
print(f"Error in analyze: {str(e)}")
log_error_path = os.path.join(log_dir, 'log_error.txt')
with open(log_error_path, "a+", encoding='utf-8') as f:
f.write(f'{datetime.now().strftime("%Y-%m-%d %H:%M:%S")} - {str(e)}\n')
return jsonify("ERROR"), 500
if __name__ == '__main__':
app.run(host='0.0.0.0', port=5000, debug=False)
================================================
FILE: WEBtool/phishpedia_web.py
================================================
import os
import shutil
from flask import request, Flask, jsonify, render_template, send_from_directory
from flask_cors import CORS
from utils_web import allowed_file, convert_to_base64, domain_map_add, domain_map_delete, check_port_inuse, initial_upload_folder
from configs import load_config
from phishpedia import PhishpediaWrapper
phishpedia_cls = None
# flask for API server
app = Flask(__name__)
cors = CORS(app, supports_credentials=True)
app.config['CORS_HEADERS'] = 'Content-Type'
app.config['UPLOAD_FOLDER'] = 'static/uploads'
app.config['FILE_TREE_ROOT'] = '../models/expand_targetlist' # root directory of the brand/logo target list
app.config['DOMAIN_MAP_PATH'] = '../models/domain_map.pkl'
@app.route('/')
def index():
"""渲染主页面"""
return render_template('index.html')
@app.route('/upload', methods=['POST'])
def upload_file():
"""处理文件上传请求"""
if 'image' not in request.files:
return jsonify({'error': 'No file part'}), 400
file = request.files['image']
if file.filename == '':
return jsonify({'error': 'No selected file'}), 400
if file and allowed_file(file.filename):
filename = file.filename
if filename.count('.') > 1:
return jsonify({'error': 'Invalid file name'}), 400
elif any(sep in filename for sep in (os.sep, os.altsep) if sep):
return jsonify({'error': 'Invalid file name'}), 400
elif '..' in filename:
return jsonify({'error': 'Invalid file name'}), 400
file_path = os.path.join(app.config['UPLOAD_FOLDER'], filename)
file_path = os.path.normpath(file_path)
if not file_path.startswith(app.config['UPLOAD_FOLDER']):
return jsonify({'error': 'Invalid file path'}), 400
file.save(file_path)
return jsonify({'success': True, 'imageUrl': f'/uploads/{filename}'}), 200
return jsonify({'error': 'Invalid file type'}), 400
@app.route('/uploads/<filename>')
def uploaded_file(filename):
"""提供上传文件的访问路径"""
return send_from_directory(app.config['UPLOAD_FOLDER'], filename)
@app.route('/clear_upload', methods=['POST'])
def delete_image():
data = request.get_json()
image_url = data.get('imageUrl')
if not image_url:
return jsonify({'success': False, 'error': 'No image URL provided'}), 400
try:
# Assume image_url is a path relative to the static directory
filename = image_url.split('/')[-1]
image_path = os.path.join(app.config['UPLOAD_FOLDER'], filename)
image_path = os.path.normpath(image_path)
if not image_path.startswith(app.config['UPLOAD_FOLDER']):
return jsonify({'success': False, 'error': 'Invalid file path'}), 400
os.remove(image_path)
return jsonify({'success': True}), 200
except Exception:
return jsonify({'success': False}), 500
@app.route('/detect', methods=['POST'])
def detect():
data = request.json
url = data.get('url', '')
imageUrl = data.get('imageUrl', '')
filename = imageUrl.split('/')[-1]
screenshot_path = os.path.join(app.config['UPLOAD_FOLDER'], filename)
screenshot_path = os.path.normpath(screenshot_path)
if not screenshot_path.startswith(app.config['UPLOAD_FOLDER']):
return jsonify({'success': False, 'error': 'Invalid file path'}), 400
phish_category, pred_target, matched_domain, plotvis, siamese_conf, _, logo_recog_time, logo_match_time = phishpedia_cls.test_orig_phishpedia(
url, screenshot_path, None)
# Interpret the detection result
if phish_category == 0:
if pred_target is None:
result = 'Unknown'
else:
result = 'Benign'
else:
result = 'Phishing'
plot_base64 = convert_to_base64(plotvis)
# Return the detection result
result = {
'result': result, # detection verdict
'matched_brand': pred_target, # matched brand
'correct_domain': matched_domain, # legitimate domain(s) of the brand
'confidence': round(float(siamese_conf), 3) if siamese_conf is not None else 0.0, # confidence score
'detection_time': round(float(logo_recog_time) + float(logo_match_time), 3), # detection time (s)
'logo_extraction': plot_base64 # annotated logo image, base64-encoded
}
return jsonify(result)
@app.route('/get-directory', methods=['GET'])
def get_file_tree():
"""
Build the file tree of the root directory
"""
def build_file_tree(path):
tree = []
try:
for entry in os.listdir(path):
entry_path = os.path.join(path, entry)
entry_path = os.path.normpath(entry_path)
if not entry_path.startswith(path):
continue
if os.path.isdir(entry_path):
tree.append({
'name': entry,
'type': 'directory',
'children': build_file_tree(entry_path) # recurse into subdirectories
})
elif entry.lower().endswith(('.png', '.jpeg', '.jpg')):
tree.append({
'name': entry,
'type': 'file'
})
else:
continue
except PermissionError:
pass # ignore directories we cannot read
return sorted(tree, key=lambda x: x['name'].lower()) # sort by name, case-insensitive
root_path = app.config['FILE_TREE_ROOT']
if not os.path.exists(root_path):
return jsonify({'error': 'Root directory does not exist'}), 404
file_tree = build_file_tree(root_path)
return jsonify({'file_tree': file_tree}), 200
@app.route('/view-file', methods=['GET'])
def view_file():
file_name = request.args.get('file')
file_path = os.path.join(app.config['FILE_TREE_ROOT'], file_name)
file_path = os.path.normpath(file_path)
if not file_path.startswith(app.config['FILE_TREE_ROOT']):
return jsonify({'error': 'Invalid file path'}), 400
if not os.path.exists(file_path):
return jsonify({'error': 'File not found'}), 404
if file_name.lower().endswith(('.png', '.jpeg', '.jpg')):
return send_from_directory(app.config['FILE_TREE_ROOT'], file_name)
return jsonify({'error': 'Unsupported file type'}), 400
@app.route('/add-logo', methods=['POST'])
def add_logo():
if 'logo' not in request.files:
return jsonify({'success': False, 'error': 'No file part'}), 400
logo = request.files['logo']
if logo.filename == '':
return jsonify({'success': False, 'error': 'No selected file'}), 400
if logo and allowed_file(logo.filename):
directory = request.form.get('directory')
if not directory:
return jsonify({'success': False, 'error': 'No directory specified'}), 400
directory_path = os.path.join(app.config['FILE_TREE_ROOT'], directory)
directory_path = os.path.normpath(directory_path)
if not directory_path.startswith(app.config['FILE_TREE_ROOT']):
return jsonify({'success': False, 'error': 'Invalid directory path'}), 400
if not os.path.exists(directory_path):
return jsonify({'success': False, 'error': 'Directory does not exist'}), 400
file_path = os.path.join(directory_path, logo.filename)
file_path = os.path.normpath(file_path)
if not file_path.startswith(directory_path):
return jsonify({'success': False, 'error': 'Invalid file path'}), 400
logo.save(file_path)
return jsonify({'success': True, 'message': 'Logo added successfully'}), 200
return jsonify({'success': False, 'error': 'Invalid file type'}), 400
@app.route('/del-logo', methods=['POST'])
def del_logo():
directory = request.form.get('directory')
filename = request.form.get('filename')
if not directory or not filename:
return jsonify({'success': False, 'error': 'Directory and filename must be specified'}), 400
directory_path = os.path.join(app.config['FILE_TREE_ROOT'], directory)
directory_path = os.path.normpath(directory_path)
if not directory_path.startswith(app.config['FILE_TREE_ROOT']):
return jsonify({'success': False, 'error': 'Invalid directory path'}), 400
file_path = os.path.join(directory_path, filename)
file_path = os.path.normpath(file_path)
if not file_path.startswith(directory_path):
return jsonify({'success': False, 'error': 'Invalid file path'}), 400
if not os.path.exists(file_path):
return jsonify({'success': False, 'error': 'File does not exist'}), 400
try:
os.remove(file_path)
return jsonify({'success': True, 'message': 'Logo deleted successfully'}), 200
except Exception:
return jsonify({'success': False}), 500
@app.route('/add-brand', methods=['POST'])
def add_brand():
brand_name = request.form.get('brandName')
brand_domain = request.form.get('brandDomain')
if not brand_name or not brand_domain:
return jsonify({'success': False, 'error': 'Brand name and domain must be specified'}), 400
# Create the brand directory
brand_directory_path = os.path.join(app.config['FILE_TREE_ROOT'], brand_name)
brand_directory_path = os.path.normpath(brand_directory_path)
if not brand_directory_path.startswith(app.config['FILE_TREE_ROOT']):
return jsonify({'success': False, 'error': 'Invalid brand directory path'}), 400
if os.path.exists(brand_directory_path):
return jsonify({'success': False, 'error': 'Brand already exists'}), 400
try:
os.makedirs(brand_directory_path)
domain_map_add(brand_name, brand_domain, app.config['DOMAIN_MAP_PATH'])
return jsonify({'success': True, 'message': 'Brand added successfully'}), 200
except Exception:
return jsonify({'success': False}), 500
@app.route('/del-brand', methods=['POST'])
def del_brand():
directory = request.json.get('directory')
if not directory:
return jsonify({'success': False, 'error': 'Directory must be specified'}), 400
directory_path = os.path.join(app.config['FILE_TREE_ROOT'], directory)
directory_path = os.path.normpath(directory_path)
if not directory_path.startswith(app.config['FILE_TREE_ROOT']):
return jsonify({'success': False, 'error': 'Invalid directory path'}), 400
if not os.path.exists(directory_path):
return jsonify({'success': False, 'error': 'Directory does not exist'}), 400
try:
shutil.rmtree(directory_path)
domain_map_delete(directory, app.config['DOMAIN_MAP_PATH'])
return jsonify({'success': True, 'message': 'Brand deleted successfully'}), 200
except Exception:
return jsonify({'success': False}), 500
@app.route('/reload-model', methods=['POST'])
def reload_model():
global phishpedia_cls
try:
load_config(reload_targetlist=True)
# Reinitialize Phishpedia
phishpedia_cls = PhishpediaWrapper()
return jsonify({'success': True, 'message': 'Model reloaded successfully'}), 200
except Exception:
return jsonify({'success': False}), 500
if __name__ == "__main__":
ip_address = '0.0.0.0'
port = 5000
while check_port_inuse(port, ip_address):
port = port + 1
# Load the core detection pipeline
phishpedia_cls = PhishpediaWrapper()
initial_upload_folder(app.config['UPLOAD_FOLDER'])
app.run(host=ip_address, port=port)
================================================
FILE: WEBtool/readme.md
================================================
# Phishpedia Web Tool
This is a web tool for Phishpedia which provides a user-friendly interface with brand and domain management capabilities, as well as visualization features for phishing detection.
## How to Run
Run the following command from the repository root:
```bash
pixi run python WEBtool/phishpedia_web.py
```
You should see a URL after the server has started (http://127.0.0.1:500x; the port starts at 5000 and is incremented if it is already in use). Visit it in your browser.
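The `500x` in the address reflects the startup logic: the server probes ports from 5000 upward and binds the first free one. A minimal sketch of such a probe (`first_free_port` is a hypothetical name; the repo's actual helper is `check_port_inuse` in `utils_web.py`):

```python
import socket

def first_free_port(start: int = 5000, host: str = "127.0.0.1") -> int:
    """Return the first port >= start with no listener, mimicking the
    check_port_inuse loop that phishpedia_web.py runs before app.run()."""
    port = start
    while True:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            # connect_ex returns 0 only when something is already listening
            if s.connect_ex((host, port)) != 0:
                return port
        port += 1
```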
## User Guide
### 1. Main Page (For phishing detection)

1. **URL Detection**
- Enter the URL to be tested in the "Enter URL" input box
- Click the "Upload Image" button to select the corresponding website screenshot
- Click the "Start Detection!" button to start detection
- Detection results will be displayed below, including text results and visual presentation
2. **Result Display**
- The original image, with detected logos annotated, will be displayed in the "Logo Extraction" box
- Detection results will be displayed in the "Detection Result" box, together with a generated explanation
- You can clearly see the detected brand identifiers and related information
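The three possible labels in the Detection Result box come from a small mapping over the model output (`phish_category`, `pred_target`); the branch in `phishpedia_web.py`'s `/detect` handler boils down to:

```python
from typing import Optional

def interpret_result(phish_category: int, pred_target: Optional[str]) -> str:
    """Map Phishpedia's raw output to the label shown in the result box
    (mirrors the branch in the /detect handler of phishpedia_web.py)."""
    if phish_category == 0:
        # not flagged: either no brand was matched, or a consistent brand match
        return "Unknown" if pred_target is None else "Benign"
    # flagged: a brand was matched but the URL's domain is not in its domain list
    return "Phishing"
```

So "Unknown" only means no brand logo was matched against the database; it does not guarantee the site is safe.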
### 2. Sidebar (For database management)
Click the sidebar button "☰" at the top-right corner to open a sidebar showing the backend database.

1. **Brand Management**
- Click "Add Brand" to add a new brand
- Enter brand name and corresponding domains in the form
- Click one brand to select, and click "Delete Brand" to remove the selected brand
- Double-click one brand to see the logo under this brand
2. **Logo Management**
- Click one brand to select, and click "Add Logo" to add brand logos
- Click one logo to select, and click "Delete Logo" to remove selected logo
3. **Data Update**
- After making changes, click the "Reload Model" button
- The system will reload the updated dataset
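Under the hood, adding or deleting a brand updates a pickle-backed domain map alongside the logo directory. The real helpers live in `utils_web.py`; a simplified sketch, assuming the map is a pickled `dict` of brand name to domain list (the layout is an assumption, not confirmed by the source):

```python
import os
import pickle

def domain_map_add(brand: str, domain: str, path: str) -> None:
    """Sketch of utils_web.domain_map_add: register a domain for a brand."""
    mapping = {}
    if os.path.exists(path):
        with open(path, "rb") as f:
            mapping = pickle.load(f)
    mapping.setdefault(brand, [])
    if domain not in mapping[brand]:
        mapping[brand].append(domain)
    with open(path, "wb") as f:
        pickle.dump(mapping, f)

def domain_map_delete(brand: str, path: str) -> None:
    """Sketch of utils_web.domain_map_delete: drop a brand's entry."""
    with open(path, "rb") as f:
        mapping = pickle.load(f)
    mapping.pop(brand, None)
    with open(path, "wb") as f:
        pickle.dump(mapping, f)
```

Edits to the map only take effect after "Reload Model" (the `/reload-model` route), which reinitializes `PhishpediaWrapper` with the updated target list.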
## Main Features
1. **Phishing Detection**
- URL input and detection
- Screenshot upload and analysis
- Detection result visualization
2. **Brand Management**
- Add/Delete brands
- Add/Delete brand logos
- Domain management
- Model reloading
## Directory Structure
```
WEBtool/
├── static/ # Static resources like css,icon
├── templates/ # Web page
├── phishpedia_web.py # A flask server
├── utils_web.py # Helper functions for the server
└── readme.md # Documentation
```
================================================
FILE: WEBtool/static/css/sidebar.css
================================================
/* Sidebar styles */
.sidebar {
position: fixed;
top: 0;
right: -400px;
width: 300px;
height: 100%;
background-color: #ffffff;
box-shadow: -2px 0 5px rgba(0, 0, 0, 0.1);
transition: right 0.3s ease;
z-index: 1000;
display: flex;
flex-direction: column;
padding: 20px;
}
/* Shown when the sidebar is open */
.sidebar.open {
right: 0;
}
/* Sidebar header */
.sidebar-header {
display: flex;
justify-content: space-between;
align-items: center;
font-size: 18px;
font-weight: bold;
margin-bottom: 20px;
}
/* Close button */
.close-sidebar {
background: none;
border: none;
font-size: 18px;
cursor: pointer;
color: #333;
}
/* Top-right toggle button */
.sidebar-toggle {
position: absolute;
top: 15px;
right: 15px;
background: #87CEFA;
color: white;
border: none;
border-radius: 5px;
padding: 10px 15px;
font-size: 18px;
font-weight: bold;
cursor: pointer;
box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1);
transition: background-color 0.3s ease;
}
.sidebar-toggle:hover {
background-color: #0056b3;
}
/* Button container */
.sidebar-buttons {
display: flex;
flex-wrap: wrap;
gap: 10px;
margin-bottom: 20px;
justify-content: space-between;
}
/* Base button style */
.sidebar-button {
flex: 1 1 calc(50% - 10px);
display: flex;
justify-content: center;
align-items: center;
background-color: #87CEFA;
color: white;
font-size: 14px;
font-weight: bold;
border: none;
border-radius: 3px;
padding: 5px 10px;
cursor: pointer;
box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1);
transition: background-color 0.3s ease, transform 0.2s ease;
}
/* Button hover effect */
.sidebar-button:hover {
background-color: #0056b3;
transform: translateY(-2px);
}
/* Button active effect */
.sidebar-button:active {
background-color: #003d80;
transform: translateY(0);
}
/* ============ File tree ============ */
/* File tree styles */
#file-tree-root {
list-style-type: none;
padding-left: 20px;
height: 580px;
max-height: 580px;
overflow-y: auto;
border: 1px solid #ccc;
background-color: white;
box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1);
}
.file-item {
margin-bottom: 5px;
}
.file-folder {
cursor: pointer;
}
.folder-name {
display: flex;
align-items: center;
}
.folder-icon {
margin-right: 5px;
}
.file-file {
cursor: pointer;
}
.file-icon {
margin-right: 5px;
}
.hidden {
display: none;
}
.file-folder>ul {
padding-left: 20px;
}
/* Preview box */
#image-preview-box {
position: absolute;
background-color: white;
border: 1px solid #ccc;
padding: 10px;
box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1);
max-width: 400px;
max-height: 300px;
overflow: hidden;
}
/* Selected state */
.selected {
border: 2px solid #007bff;
padding: 2px;
box-sizing: border-box;
}
/* ============== Forms ============= */
.form-container {
position: fixed;
top: 50%;
left: 50%;
transform: translate(-50%, -50%);
background-color: #ffffff;
padding: 20px 30px;
border-radius: 10px;
box-shadow: 0 4px 15px rgba(0, 0, 0, 0.1);
width: 300px;
max-width: 90%;
z-index: 1001;
}
/* Form title */
.form-container h3 {
font-size: 22px;
font-weight: bold;
color: #333;
margin-bottom: 20px;
text-align: center;
font-family: 'Arial', sans-serif;
}
input[type="label"] {
width: 20%;
}
/* Text input */
input[type="text"] {
width: 90%;
padding: 12px;
margin: 12px 0;
border: 1px solid #ddd;
border-radius: 8px;
background-color: #f9f9f9;
font-size: 16px;
color: #333;
box-shadow: inset 0 2px 4px rgba(0, 0, 0, 0.1);
transition: border-color 0.3s ease, background-color 0.3s ease;
text-align: center;
}
/* Input focus effect */
input[type="text"]:focus {
border-color: #3498db;
background-color: #fff;
outline: none;
}
/* Submit button */
button[type="submit"] {
background-color: #3498db;
color: white;
}
/* Cancel button */
button[type="button"] {
background-color: #7c7c7c;
color: white;
}
/* Form button container */
.form-actions {
width: 100%;
display: flex;
justify-content: space-between;
gap: 12px;
margin-top: 20px;
}
/* Submit button */
button[type="submit"] {
background-color: #3498db;
color: white;
padding: 10px 20px;
border: none;
border-radius: 5px;
font-size: 14px;
cursor: pointer;
transition: background-color 0.3s ease, transform 0.2s ease;
}
/* Submit button hover effect */
button[type="submit"]:hover {
background-color: #2980b9;
transform: translateY(-2px);
}
/* Submit button active effect */
button[type="submit"]:active {
background-color: #1abc9c;
transform: translateY(0);
}
/* Cancel button */
button[type="button"] {
background-color: #7c7c7c;
color: white;
padding: 10px 20px;
border: none;
border-radius: 5px;
font-size: 14px;
cursor: pointer;
transition: background-color 0.3s ease, transform 0.2s ease;
}
/* Cancel button hover effect */
button[type="button"]:hover {
background-color: #555;
transform: translateY(-2px);
}
/* Cancel button active effect */
button[type="button"]:active {
background-color: #333;
transform: translateY(0);
}
/* Overlay */
#overlay {
position: fixed;
top: 0;
left: 0;
width: 100%;
height: 100%;
background-color: rgba(0, 0, 0, 0.5);
display: flex;
justify-content: center;
align-items: center;
z-index: 1002;
}
/* Spinner */
#spinner {
border: 2px solid #f3f3f3;
border-top: 2px solid #3498db;
border-radius: 50%;
width: 16px;
height: 16px;
animation: spin 2s linear infinite;
margin-right: 10px;
}
/* Spin animation */
@keyframes spin {
0% {
transform: rotate(0deg);
}
100% {
transform: rotate(360deg);
}
}
/* Overlay text */
#overlay p {
color: white;
font-size: 16px;
font-weight: bold;
text-align: center;
line-height: 16px;
margin: 0;
}
#overlay .spinner-container {
display: flex;
align-items: center;
}
================================================
FILE: WEBtool/static/css/style.css
================================================
body,
html {
margin: 0;
padding: 0;
font-family: Arial, sans-serif;
background-color: #faf4f2;
}
ul {
list-style-type: none;
padding: 0;
}
li {
margin: 5px 0;
}
#header {
display: flex;
align-items: center;
justify-content: flex-start;
position: absolute;
top: 0px;
left: 0px;
background-color: rgba(255, 255, 255, 0.8);
padding: 10px 10px;
border-radius: 5px;
box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1);
width: 100%;
margin-bottom: 10px;
}
#logo-icon {
height: 60px;
width: auto;
margin-right: 20px;
}
#logo-text {
display: flex;
align-items: center;
height: 80px;
line-height: 80px;
letter-spacing: 2px;
background: linear-gradient(90deg, #3498db, #f9f388);
-webkit-background-clip: text;
background-clip: text;
-webkit-text-fill-color: transparent;
text-shadow: 1px 1px 3px rgba(0, 0, 0, 0.2);
font-size: 35px;
font-weight: bold;
}
#main-container {
display: flex;
flex-direction: column;
align-items: center;
width: 100%;
margin-top: 130px;
}
#input-container {
display: flex;
flex-direction: column;
align-items: center;
width: 1200px;
padding: 20px;
border-radius: 8px;
border: 1px solid #ddd;
background-color: #dff0fb;
}
.inner-container {
width: 100%;
height: 100%;
display: flex;
flex-direction: column;
align-items: center;
border-radius: 5px;
border: 3px dashed white;
background-color: #eaf4fb;
padding-top: 20px;
padding-bottom: 20px;
}
#output-container {
display: flex;
flex-direction: column;
align-items: center;
width: 1240px;
margin-top: 10px;
}
/* ============================= URL input area =============================*/
#url-input-container {
display: flex;
justify-content: center;
align-items: center;
gap: 10px;
width: 500px;
}
.custom-label {
background-color: #87CEFA;
color: white;
border-radius: 25px;
padding: 10px 20px;
font-size: 16px;
font-weight: bold;
border: none;
text-align: center;
white-space: nowrap;
}
#url-input {
background-color: #dcdcdc;
color: #333;
border: none;
border-radius: 15px;
padding: 10px 20px;
font-size: 16px;
outline: none;
width: 300px;
box-shadow: inset 0 2px 4px rgba(0, 0, 0, 0.1);
}
#url-input::placeholder {
color: #888;
font-style: italic;
}
/* ============================= Image upload area =============================*/
#image-upload-container {
display: flex;
justify-content: center;
align-items: center;
width: 410px;
}
.drop-area {
border: 2px dashed #007BFF;
border-radius: 8px;
background-color: #ffffff;
padding: 20px;
text-align: center;
font-size: 1.2em;
color: #004085;
margin-top: 10px;
width: 100%;
height: 20vh;
margin: 20px auto;
transition: background-color 0.3s ease;
}
.upload-icon {
width: 50px;
height: 50px;
margin-bottom: 10px;
}
.upload-label {
cursor: pointer;
margin-bottom: -10px;
background-color: white;
color: black;
padding: 10px 20px;
border: 2px solid #ccc;
border-radius: 6px;
text-align: center;
font-size: small;
display: inline-block;
line-height: 1;
font-family: Arial, sans-serif;
}
.upload-label:hover {
background-color: #f0f0f0;
}
.upload-success-area {
display: flex;
flex-direction: column;
align-items: center;
justify-content: center;
padding: 20px;
border: 2px dashed #007BFF;
border-radius: 8px;
background-color: #ffffff;
margin-top: 10px;
margin-bottom: 10px;
}
.success-message {
display: flex;
align-items: center;
margin-bottom: 10px;
font-size: larger;
}
.success-icon {
width: 30px;
height: 30px;
margin-right: 5px;
}
.success-text {
font-size: 16px;
}
.uploaded-thumbnail {
width: 400px;
height: auto;
margin-top: 10px;
margin-bottom: 10px;
}
.clear-button {
padding: 10px 20px;
background-color: #888888;
color: white;
border: none;
border-radius: 8px;
font-size: 16px;
font-weight: bold;
cursor: pointer;
transition: background-color 0.3s ease;
box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1);
}
.clear-button:hover {
background-color: #555555;
}
#start-detection-button {
background-color: #007BFF;
color: white;
border: none;
border-radius: 25px;
padding: 10px 20px;
font-size: 16px;
font-weight: bold;
cursor: pointer;
margin-top: 0px;
width: 410px;
transition: background-color 0.3s ease;
}
#start-detection-button:hover {
background-color: #0056b3;
}
/* ============================= Result containers =============================*/
#result-container {
display: flex;
flex-direction: row;
justify-content: space-between;
align-items: flex-start;
width: 100%;
max-width: 1500px;
gap: 20px;
}
#original-image-container,
#detection-result-container {
display: flex;
flex-direction: column;
align-items: center;
width: 50%;
height: 450px;
border: 1px solid #ddd;
border-radius: 10px;
padding-top: 10px;
padding-left: 20px;
padding-right: 20px;
padding-bottom: 20px;
background-color: #ffffff;
box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1);
transition: transform 0.3s ease;
}
#original-image-container:hover,
#detection-result-container:hover {
transform: scale(1.02);
transition: transform 0.3s ease;
}
.result_title {
width: 100%;
height: 20px;
margin-top: 0px;
text-align: center;
padding: 10px;
border-radius: 8px;
font-family: Arial, sans-serif;
font-weight: bold;
font-size: 18px;
}
#logo-extraction-result {
width: 100%;
height: 100%;
display: flex;
justify-content: center;
align-items: center;
overflow: hidden;
margin-top: 10px;
background-color: #f9f9f9;
border: 1px solid #ddd;
border-radius: 8px;
}
#original-image {
max-height: 100%;
max-width: 100%;
object-fit: contain;
}
#detection-result {
width: 100%;
height: 100%;
margin-top: 10px;
text-align: left;
padding: 10px;
background-color: #f9f9f9;
border: 1px solid #ddd;
border-radius: 8px;
}
#detection-label {
display: inline-block;
font-family: Arial, sans-serif;
font-size: 14px;
font-weight: bold;
color: white;
padding: 3px 6px;
border-radius: 16px;
text-align: center;
transition: transform 0.2s, box-shadow 0.2s;
}
#detection-label.benign {
background: linear-gradient(90deg, #4CAF50, #4CAF50);
}
#detection-label.phishing {
background: linear-gradient(90deg, #F44336, #F44336);
}
#detection-label.unknown {
background: linear-gradient(90deg, #9E9E9E, #9E9E9E);
}
#detection-explanation {
font-size: 14px;
color: #333;
}
.separator {
width: 100%;
height: 2px;
background-color: #ddd;
margin: 10px 0;
}
.tasks-list {
list-style: none;
padding: 0;
margin: 0;
}
.tasks-list li {
display: flex;
align-items: center;
justify-content: flex-start;
padding: 8px 0;
border-bottom: 1px solid #eee;
}
.tasks-list li:last-child {
border-bottom: none;
}
.icon {
margin-right: 8px;
font-size: 16px;
}
.task {
font-size: 14px;
color: #555;
margin-right: 12px;
}
.result {
font-size: 14px;
color: #5b5b5b;
background-color: #cdcdcd;
padding: 3px 6px;
border-radius: 10px;
}
#detection-explanation {
font-family: Arial, sans-serif;
font-size: 14px;
line-height: 1.8;
color: #333;
background-color: #f9f9f9;
padding: 16px;
border-left: 4px solid #0078d4;
border-radius: 8px;
box-shadow: 0 2px 6px rgba(0, 0, 0, 0.1);
margin: 16px 0;
}
#detection-explanation p {
margin: 0;
}
#detection-explanation strong {
color: #d9534f;
font-weight: bold;
background-color: #fff0f0;
padding: 2px 4px;
border-radius: 4px;
}
================================================
FILE: WEBtool/static/js/main.js
================================================
new Vue({
el: '#main-container',
data() {
return {
url: '',
result: null,
uploadedImage: null,
imageUrl: '',
uploadSuccess: false,
}
},
methods: {
startDetection() {
if (!this.url) {
alert('Please enter a valid URL.');
return;
}
// Send a POST request to the /detect route
fetch('/detect', {
method: 'POST',
headers: {
'Content-Type': 'application/json'
},
body: JSON.stringify({
url: this.url,
imageUrl: this.imageUrl
})
})
.then(response => response.json())
.then(data => {
this.result = data; // Update all data
if (data.logo_extraction) { // Logo Extraction Result
document.getElementById('original-image').src = `data:image/png;base64,${data.logo_extraction}`;
}
// Detection Result
const labelElement = document.getElementById('detection-label');
const explanationElement = document.getElementById('detection-explanation');
const matched_brand_element = document.getElementById('matched-brand');
const siamese_conf_element = document.getElementById('siamese-conf');
const correct_domain_element = document.getElementById('correct-domain');
const detection_time_element = document.getElementById('detection-time');
detection_time_element.textContent = data.detection_time + ' s';
if (data.result === 'Benign') {
labelElement.className = 'benign';
labelElement.textContent = 'Benign';
matched_brand_element.textContent = data.matched_brand;
siamese_conf_element.textContent = data.confidence;
correct_domain_element.textContent = data.correct_domain;
explanationElement.innerHTML = `
<p>This website has been analyzed and determined to be <strong>${labelElement.textContent.toLowerCase()}</strong>.
Because we have matched a brand <strong>${data.matched_brand}</strong> with confidence <strong>${Math.round(data.confidence * 100)}%</strong>,
and the domain extracted from the URL is within the domain list under the brand (which is <strong>[${data.correct_domain}]</strong>).
Enjoy your surfing!</p>
`;
} else if (data.result === 'Phishing') {
labelElement.className = 'phishing';
labelElement.textContent = 'Phishing';
matched_brand_element.textContent = data.matched_brand;
siamese_conf_element.textContent = data.confidence;
correct_domain_element.textContent = data.correct_domain;
explanationElement.innerHTML = `
<p>This website has been analyzed and determined to be <strong>${labelElement.textContent.toLowerCase()}</strong>.
Because we have matched a brand <strong>${data.matched_brand}</strong> with confidence <strong>${Math.round(data.confidence * 100)}%</strong>,
but the domain extracted from the URL is NOT within the domain list under the brand (which is <strong>[${data.correct_domain}]</strong>).
Please proceed with caution!</p>
`;
} else {
labelElement.className = 'unknown';
labelElement.textContent = 'Unknown';
matched_brand_element.textContent = "unknown";
siamese_conf_element.textContent = "0.00";
correct_domain_element.textContent = "unknown";
explanationElement.innerHTML = `
<p>Sorry, we did not find any matching brand in the database, so this website is determined to be <strong>${labelElement.textContent.toLowerCase()}</strong>.</p>
<p>It is still possible that this is a <strong>phishing</strong> site. Please proceed with caution!</p>
`;
}
})
.catch(error => {
console.error('Error:', error);
alert('Detection failed, please try again later.');
});
},
handleImageUpload(event) { // Handle the image selection event
const file = event.target.files[0];
if (file) {
this.uploadedImage = file;
this.uploadImage();
}
},
uploadImage() { // Upload the image to the server
const formData = new FormData();
formData.append('image', this.uploadedImage);
fetch('/upload', { // the image upload route
method: 'POST',
body: formData
})
.then(response => response.json())
.then(data => {
if (data.success) {
this.imageUrl = data.imageUrl; // store the image URL
this.uploadSuccess = true; // mark the upload as successful
} else {
alert('Image upload failed: ' + data.error);
}
})
.catch(error => {
console.error('Error:', error);
alert('Image upload failed, please try again later.');
});
},
clearUpload() { // Remove the uploaded image
fetch('/clear_upload', { // the image deletion route
method: 'POST',
headers: {
'Content-Type': 'application/json'
},
body: JSON.stringify({ imageUrl: this.imageUrl })
})
.then(response => response.json())
.then(data => {
if (data.success) {
this.imageUrl = '';
this.uploadSuccess = false; // reset the upload state
} else {
alert('Failed to delete image: ' + data.error);
}
})
.catch(error => {
console.error('Error:', error);
alert('Failed to delete image, please try again later.');
});
}
}
});
================================================
FILE: WEBtool/static/js/sidebar.js
================================================
// sidebar.js
new Vue({
el: '#sidebar',
data() {
return {
selectedDirectory: null, // currently selected directory element
selectedFile: null, // currently selected file element
selectedDirectoryName: '',
selectedFileName: '',
showAddBrandForm: false, // controls form visibility
brandName: '', // brand name
brandDomain: '', // brand domain
}
},
mounted() {
// Load the file tree when the page is mounted
this.fetchFileTree();
document.getElementById('logo-file-input').addEventListener('change', this.handleLogoFileSelect);
const sidebar = document.getElementById("sidebar");
const sidebarToggle = document.getElementById("sidebar-toggle");
const closeSidebar = document.getElementById("close-sidebar");
// Open the sidebar on click
sidebarToggle.addEventListener("click", () => {
sidebar.classList.add("open");
});
// Close the sidebar on click
closeSidebar.addEventListener("click", () => {
sidebar.classList.remove("open");
this.clearSelected();
});
// Close when clicking outside the sidebar
document.addEventListener("click", (event) => {
if (!sidebar.contains(event.target) && !sidebarToggle.contains(event.target)) {
sidebar.classList.remove("open");
this.clearSelected();
}
});
},
methods: {
// Recursively render the file tree
renderFileTree(directory, parentPath = '') {
// Get the file tree container
const fileTreeRoot = document.getElementById('file-tree-root');
fileTreeRoot.innerHTML = ''; // clear existing content
// Recursively create file tree nodes
const createFileTreeNode = (item, parentPath) => {
const li = document.createElement('li');
li.classList.add('file-item');
const currentPath = parentPath ? `${parentPath}/${item.name}` : item.name;
if (item.type === 'directory') {
li.classList.add('file-folder');
const folderNameContainer = document.createElement('div');
folderNameContainer.classList.add('folder-name');
folderNameContainer.innerHTML = `<i class="folder-icon">📁</i><span>${item.name}</span>`;
li.appendChild(folderNameContainer);
if (item.children) {
const ul = document.createElement('ul');
ul.classList.add('hidden'); // hide subdirectories by default
item.children.forEach((child) => {
ul.appendChild(createFileTreeNode(child, currentPath)); // pass down the current directory path
});
li.appendChild(ul);
// Single click selects the directory
folderNameContainer.addEventListener('click', (e) => {
e.stopPropagation();
this.selectDirectory(e, item.name);
});
// Double click expands/collapses the directory
folderNameContainer.addEventListener('dblclick', (e) => {
e.stopPropagation();
ul.classList.toggle('hidden');
});
}
} else {
li.classList.add('file-file');
li.innerHTML = `<i class="file-icon">📄</i><span>${item.name}</span>`;
// Single click selects the file
li.addEventListener('click', (event) => {
this.selectFile(event, item.name, parentPath);
});
}
return li;
};
// Iterate over top-level files and directories
directory.forEach((item) => {
fileTreeRoot.appendChild(createFileTreeNode(item, parentPath));
});
},
// Fetch the file tree data
fetchFileTree() {
// Request the file tree data
fetch('/get-directory') // backend file tree endpoint
.then((response) => response.json())
.then((data) => {
if (data.file_tree) {
this.fileTree = data.file_tree; // store the file tree data
this.renderFileTree(this.fileTree); // render the file tree
} else {
console.error('Invalid file tree data');
alert('Failed to load the file tree.');
}
})
.catch((error) => {
console.error('Error fetching file tree:', error);
alert('Unable to load the file tree, please try again later.');
});
},
// Select a directory
selectDirectory(event, directoryName) {
const folderNameContainer = event.currentTarget;
if (this.selectedDirectory) {
this.selectedDirectory.classList.remove('selected');
}
if (this.selectedFile) {
this.selectedFile.classList.remove('selected');
}
// Set the currently selected directory
this.selectedDirectory = folderNameContainer;
this.selectedDirectoryName = directoryName;
folderNameContainer.classList.add('selected');
this.selectedFile = null;
this.selectedFileName = '';
},
// Select a file
selectFile(event, fileName, parentPath) {
const fileElement = event.currentTarget;
if (this.selectedDirectory) {
this.selectedDirectory.classList.remove('selected');
}
if (this.selectedFile) {
this.selectedFile.classList.remove('selected');
}
// Record the currently selected file
this.selectedFile = fileElement;
this.selectedFileName = fileName;
fileElement.classList.add('selected');
this.selectedDirectory = null;
this.selectedDirectoryName = parentPath;
},
// Add a brand
addBrand() {
this.showAddBrandForm = true;
},
// Close the add-brand form
closeAddBrandForm() {
this.showAddBrandForm = false;
this.brandName = '';
this.brandDomain = '';
},
// Submit the add-brand form
submitAddBrandForm() {
if (!this.brandName || !this.brandDomain) {
alert('Please fill in all fields.');
this.closeAddBrandForm();
return;
}
const formData = new FormData();
formData.append('brandName', this.brandName);
formData.append('brandDomain', this.brandDomain);
fetch('/add-brand', {
method: 'POST',
body: formData
})
.then(response => response.json())
.then(data => {
if (data.success) {
alert('Brand added successfully.');
this.fetchFileTree();
this.closeAddBrandForm();
} else {
alert('Failed to add brand: ' + data.error);
}
})
.catch(error => {
console.error('Error:', error);
alert('Failed to add brand, please try again.');
});
},
// Delete a brand
delBrand() {
if (this.selectedDirectory == null) {
alert('Please select a brand first.');
return;
}
fetch('/del-brand', {
method: 'POST',
headers: {
'Content-Type': 'application/json'
},
body: JSON.stringify({
directory: this.selectedDirectoryName
})
})
.then(response => response.json())
.then(data => {
    if (data.success) {
        alert('Brand deleted successfully.');
        this.fetchFileTree();
    } else {
        alert('Failed to delete brand: ' + data.error);
    }
})
.catch(error => {
    console.error('Error:', error);
    alert('Failed to delete brand, please try again.');
})
},
// Add a logo
addLogo() {
console.log('addLogo');
if (this.selectedDirectory == null) {
alert('Please select a brand first.');
return;
}
document.getElementById('logo-file-input').click();
},
handleLogoFileSelect(event) {
const file = event.target.files[0];
if (file) {
const formData = new FormData();
formData.append('logo', file);
formData.append('directory', this.selectedDirectoryName);
fetch('/add-logo', {
method: 'POST',
body: formData
})
.then(response => response.json())
.then(data => {
if (data.success) {
this.fetchFileTree();
} else {
alert('Failed to add logo: ' + data.error);
}
})
.catch(error => {
console.error('Error:', error);
alert('Failed to add logo, please try again.');
});
}
},
// Delete a logo
delLogo() {
if (this.selectedFile == null) {
alert('Please select a logo first.');
return;
}
const formData = new FormData();
formData.append('directory', this.selectedDirectoryName);
formData.append('filename', this.selectedFileName);
fetch('/del-logo', {
method: 'POST',
body: formData
})
.then(response => response.json())
.then(data => {
if (data.success) {
this.fetchFileTree();
} else {
alert('Failed to delete logo: ' + data.error);
}
})
.catch(error => {
console.error('Error:', error);
alert('Failed to delete logo, please try again.');
});
},
async reloadModel() {
const overlay = document.getElementById('overlay');
overlay.style.display = 'flex';
try {
const response = await fetch('/reload-model', {
method: 'POST',
headers: {
'Content-Type': 'application/json'
}
});
const data = await response.json();
if (!data.success) { alert('Failed to reload model: ' + data.error); }
} catch (error) {
alert('Failed to reload model.');
} finally {
overlay.style.display = 'none';
}
},
clearSelected() {
if (this.selectedDirectory) {
this.selectedDirectory.classList.remove('selected');
this.selectedDirectory = null;
}
if (this.selectedFile) {
this.selectedFile.classList.remove('selected');
this.selectedFile = null;
}
this.selectedDirectoryName = '';
this.selectedFileName = '';
},
}
});
================================================
FILE: WEBtool/templates/index.html
================================================
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>PhishPedia</title>
<link rel="stylesheet" href="{{ url_for('static', filename='css/style.css') }}">
<link rel="stylesheet" href="{{ url_for('static', filename='css/sidebar.css') }}">
</head>
<body>
<!-- Logo and icon header -->
<div id="header">
<img src="{{ url_for('static', filename='icon/fish.png') }}" alt="Logo" id="logo-icon">
<span id="logo-text">PhishPedia</span>
<button id="sidebar-toggle" class="sidebar-toggle">☰</button>
</div>
<div id="overlay" style="display: none;">
<div class="spinner-container">
<div id="spinner"></div>
<p>Reloading model, this may take some time...</p>
</div>
</div>
<!-- Sidebar -->
<div id="sidebar" class="sidebar">
<div class="sidebar-header">
<span>DATABASE</span>
<button id="close-sidebar" class="close-sidebar">✖</button>
</div>
<div class="separator"></div>
<!-- Button group -->
<div class="sidebar-buttons">
<button class="sidebar-button" @click="addBrand">ADD Brand</button>
<button class="sidebar-button" @click="delBrand">DEL Brand</button>
<button class="sidebar-button" @click="addLogo">ADD LOGO</button>
<button class="sidebar-button" @click="delLogo">DEL LOGO</button>
<button class="sidebar-button" @click="reloadModel">Reload Model</button>
</div>
<input type="file" id="logo-file-input" style="display: none;" accept=".png,.jpeg,.jpg">
<div class="separator"></div>
<!-- File tree container -->
<div class="file-tree">
<ul id="file-tree-root" class="file-tree-root">
<!-- File tree content is generated dynamically by JavaScript -->
</ul>
</div>
<!-- Add-brand form -->
<div v-if="showAddBrandForm" id="add-brand-form" class="form-container">
<form @submit.prevent="submitAddBrandForm">
<h3>Add A New Brand</h3>
<div class="separator"></div>
<label for="brandName">Brand Name</label>
<input type="text" id="brandName" v-model="brandName" required>
<label for="brandDomain">Domain List</label>
<input type="text" id="brandDomain" v-model="brandDomain" required>
<div class="form-actions">
<button type="submit">ADD</button>
<button type="button" @click="closeAddBrandForm">CANCEL</button>
</div>
</form>
</div>
</div>
<!-- Centered page content -->
<div id="main-container">
<div id="input-container">
<div class="inner-container">
<!-- URL input -->
<div id="url-input-container">
<label for="url-input" class="custom-label">URL</label>
<input type="text" id="url-input" v-model="url" placeholder="Enter URL:" />
</div>
<!-- Image upload area -->
<div id="image-upload-container">
<div id="image-drop-area" class="drop-area" v-if="!uploadSuccess">
<img src="{{ url_for('static', filename='icon/file1.png') }}" alt="Upload Icon"
class="upload-icon" />
<p></p>
<label for="image-upload" class="upload-label">+ Upload Image</label>
<p style="font-size: 14px;">Or ctrl+v here</p>
<input type="file" id="image-upload" accept="image/*" style="display: none;"
@change="handleImageUpload" />
</div>
<div id="upload-success-area" class="upload-success-area" v-if="uploadSuccess">
<div class="success-message">
<img src="{{ url_for('static', filename='icon/succ.png') }}" alt="Success Icon"
class="success-icon" />
<span class="success-text">Uploaded Successfully!</span>
</div>
<img :src="imageUrl" alt="Uploaded Image" class="uploaded-thumbnail" />
<button class="clear-button" @click="clearUpload">clear</button>
</div>
</div>
<!-- Start-detection button -->
<button id="start-detection-button" @click="startDetection">Start Detection !</button>
</div>
</div>
<div id="output-container">
<div id="result-container">
<div id="original-image-container">
<span class="result_title">Logo Extraction</span>
<div id="logo-extraction-result">
<img id="original-image" src="{{ url_for('static', filename='icon/noresult1.png') }}"
alt="Original Webpage Screenshot" />
</div>
</div>
<div id="detection-result-container">
<span class="result_title">Detection Result</span>
<div id="detection-result">
<div>
<span class="icon">📊</span>
<span class="task" style="font-weight: bold;">Result</span>
<div id="detection-label"></div>
</div>
<div class="separator"></div>
<div>
<ul class="tasks-list">
<li>
<span class="icon">🏷️</span>
<span class="task">Matched Brand</span>
<span class="result" id="matched-brand"></span>
</li>
<li>
<span class="icon">💬</span>
<span class="task">Siamese Confidence</span>
<span class="result" id="siamese-conf"></span>
</li>
<li>
<span class="icon">🌐</span>
<span class="task">Correct Domain</span>
<span class="result" id="correct-domain"></span>
</li>
<li>
<span class="icon">⏱️</span>
<span class="task">Detection Time</span>
<span class="result" id="detection-time"></span>
</li>
<li>
<div id="detection-explanation"></div>
</li>
</ul>
</div>
</div>
</div>
</div>
</div>
</div>
<!-- Vue.js and custom scripts -->
<script src="https://cdn.jsdelivr.net/npm/vue@2"></script>
<script src="{{ url_for('static', filename='js/main.js') }}"></script>
<script src="{{ url_for('static', filename='js/sidebar.js') }}"></script>
</body>
</html>
================================================
FILE: WEBtool/utils_web.py
================================================
# Helper functions for the Phishpedia web app
import os
import pickle
import shutil
import socket
import base64
import io
from PIL import Image
import cv2
def check_port_inuse(port, host):
    s = None  # initialize so the finally block is safe even if socket() raises
    try:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(1)
        s.connect((host, port))
        return True
    except socket.error:
        return False
    finally:
        if s:
            s.close()
def allowed_file(filename):
return '.' in filename and \
filename.rsplit('.', 1)[1].lower() in {'png', 'jpg', 'jpeg'}
def initial_upload_folder(upload_folder):
try:
shutil.rmtree(upload_folder)
except FileNotFoundError:
pass
os.makedirs(upload_folder, exist_ok=True)
def convert_to_base64(image_array):
if image_array is None:
return None
image_array_rgb = cv2.cvtColor(image_array, cv2.COLOR_BGR2RGB)
img = Image.fromarray(image_array_rgb)
buffered = io.BytesIO()
img.save(buffered, format="PNG")
plotvis_base64 = base64.b64encode(buffered.getvalue()).decode('utf-8')
return plotvis_base64
def domain_map_add(brand_name, domains_str, domain_map_path):
domains = [domain.strip() for domain in domains_str.split(',') if domain.strip()]
# Load existing domain mapping
with open(domain_map_path, 'rb') as f:
domain_map = pickle.load(f)
# Add new brand and domains
if brand_name in domain_map:
if isinstance(domain_map[brand_name], list):
# Add new domains, avoid duplicates
existing_domains = set(domain_map[brand_name])
for domain in domains:
if domain not in existing_domains:
domain_map[brand_name].append(domain)
else:
# If current value is not a list, convert to list
old_domain = domain_map[brand_name]
domain_map[brand_name] = [old_domain] + [d for d in domains if d != old_domain]
else:
domain_map[brand_name] = domains
# Save updated mapping
with open(domain_map_path, 'wb') as f:
pickle.dump(domain_map, f)
def domain_map_delete(brand_name, domain_map_path):
# Load existing domain mapping
with open(domain_map_path, 'rb') as f:
domain_map = pickle.load(f)
print("before deleting", len(domain_map))
# Delete brand and its domains
if brand_name in domain_map:
del domain_map[brand_name]
print("after deleting", len(domain_map))
# Save updated mapping
with open(domain_map_path, 'wb') as f:
pickle.dump(domain_map, f)
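The domain map that `domain_map_add` and `domain_map_delete` maintain is a pickled dict mapping each brand name to a list of its legitimate domains. A minimal sketch of that structure and the duplicate-free merge rule, assuming made-up brands and a temporary path:

```python
import os
import pickle
import tempfile

# Hypothetical domain map: brand name -> list of legitimate domains,
# mirroring the structure domain_map_add expects to find on disk.
domain_map = {"PayPal": ["paypal.com"], "Google": ["google.com", "google.co.uk"]}

path = os.path.join(tempfile.mkdtemp(), "domain_map.pkl")
with open(path, "wb") as f:
    pickle.dump(domain_map, f)

# Merge new domains for an existing brand while avoiding duplicates,
# the same logic domain_map_add applies to the loaded mapping.
with open(path, "rb") as f:
    loaded = pickle.load(f)
new_domains = ["paypal.com", "paypal.me"]
existing = set(loaded["PayPal"])
loaded["PayPal"].extend(d for d in new_domains if d not in existing)

print(loaded["PayPal"])  # ['paypal.com', 'paypal.me']
```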
================================================
FILE: configs.py
================================================
# Global configuration
import yaml
from logo_matching import cache_reference_list, load_model_weights
from logo_recog import config_rcnn
import os
import numpy as np
def get_absolute_path(relative_path):
base_path = os.path.dirname(__file__)
return os.path.abspath(os.path.join(base_path, relative_path))
def load_config(reload_targetlist=False):
with open(os.path.join(os.path.dirname(__file__), 'configs.yaml')) as file:
configs = yaml.load(file, Loader=yaml.FullLoader)
# Iterate through the configuration and update paths
for section, settings in configs.items():
for key, value in settings.items():
if 'PATH' in key and isinstance(value, str): # Check if the key indicates a path
absolute_path = get_absolute_path(value)
configs[section][key] = absolute_path
ELE_CFG_PATH = configs['ELE_MODEL']['CFG_PATH']
ELE_WEIGHTS_PATH = configs['ELE_MODEL']['WEIGHTS_PATH']
ELE_CONFIG_THRE = configs['ELE_MODEL']['DETECT_THRE']
ELE_MODEL = config_rcnn(ELE_CFG_PATH,
ELE_WEIGHTS_PATH,
conf_threshold=ELE_CONFIG_THRE)
# siamese model
SIAMESE_THRE = configs['SIAMESE_MODEL']['MATCH_THRE']
print('Load protected logo list')
targetlist_zip_path = configs['SIAMESE_MODEL']['TARGETLIST_PATH']
targetlist_dir = os.path.dirname(targetlist_zip_path)
zip_file_name = os.path.basename(targetlist_zip_path)
targetlist_folder = zip_file_name.split('.zip')[0]
full_targetlist_folder_dir = os.path.join(targetlist_dir, targetlist_folder)
# if reload_targetlist or targetlist_zip_path.endswith('.zip') and not os.path.isdir(full_targetlist_folder_dir):
# os.makedirs(full_targetlist_folder_dir, exist_ok=True)
# subprocess.run(f'unzip -o "{targetlist_zip_path}" -d "{full_targetlist_folder_dir}"', shell=True)
SIAMESE_MODEL = load_model_weights(num_classes=configs['SIAMESE_MODEL']['NUM_CLASSES'],
weights_path=configs['SIAMESE_MODEL']['WEIGHTS_PATH'])
LOGO_FEATS_NAME = 'LOGO_FEATS.npy'
LOGO_FILES_NAME = 'LOGO_FILES.npy'
if reload_targetlist or (not os.path.exists(os.path.join(os.path.dirname(__file__), LOGO_FEATS_NAME))):
LOGO_FEATS, LOGO_FILES = cache_reference_list(model=SIAMESE_MODEL,
targetlist_path=full_targetlist_folder_dir)
print('Finish loading protected logo list')
np.save(os.path.join(os.path.dirname(__file__), LOGO_FEATS_NAME), LOGO_FEATS)
np.save(os.path.join(os.path.dirname(__file__), LOGO_FILES_NAME), LOGO_FILES)
else:
LOGO_FEATS, LOGO_FILES = np.load(os.path.join(os.path.dirname(__file__), LOGO_FEATS_NAME)), \
np.load(os.path.join(os.path.dirname(__file__), LOGO_FILES_NAME))
DOMAIN_MAP_PATH = configs['SIAMESE_MODEL']['DOMAIN_MAP_PATH']
return ELE_MODEL, SIAMESE_THRE, SIAMESE_MODEL, LOGO_FEATS, LOGO_FILES, DOMAIN_MAP_PATH
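`load_config` rewrites every config value whose key contains `PATH` into an absolute path anchored at this file's directory, so relative paths in `configs.yaml` work regardless of the working directory. A minimal sketch of that rewrite rule on a plain dict (no YAML needed; the base path is made up):

```python
import os

def absolutize_paths(configs, base_path):
    # Rewrite every string value whose key mentions 'PATH' into an
    # absolute path rooted at base_path, same rule as load_config.
    for section, settings in configs.items():
        for key, value in settings.items():
            if 'PATH' in key and isinstance(value, str):
                configs[section][key] = os.path.abspath(os.path.join(base_path, value))
    return configs

cfg = {'ELE_MODEL': {'CFG_PATH': 'models/faster_rcnn.yaml', 'DETECT_THRE': 0.05}}
out = absolutize_paths(cfg, '/opt/phishpedia')
print(out['ELE_MODEL']['CFG_PATH'])
```

Note that non-path values such as `DETECT_THRE` are left untouched because of the `isinstance(value, str)` guard.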
================================================
FILE: configs.yaml
================================================
ELE_MODEL: # element recognition model -- logo only
CFG_PATH: models/faster_rcnn.yaml # os.path.join(os.path.dirname(__file__), xxx)
WEIGHTS_PATH: models/rcnn_bet365.pth
DETECT_THRE: 0.05
SIAMESE_MODEL:
NUM_CLASSES: 277 # number of brands, users don't need to modify this even if the targetlist is expanded
MATCH_THRE: 0.87 # FIXME: threshold is 0.87 in phish-discovery?
WEIGHTS_PATH: models/resnetv2_rgb_new.pth.tar
TARGETLIST_PATH: models/expand_targetlist.zip
DOMAIN_MAP_PATH: models/domain_map.pkl
================================================
FILE: datasets/test_sites/accounts.g.cdcde.com/html.txt
================================================
================================================
FILE: datasets/test_sites/accounts.g.cdcde.com/info.txt
================================================
================================================
FILE: logo_matching.py
================================================
from PIL import Image, ImageOps
from torchvision import transforms
from utils import brand_converter, resolution_alignment, l2_norm
from models import KNOWN_MODELS
import torch
import os
import numpy as np
from collections import OrderedDict
from tqdm import tqdm
from tldextract import tldextract
import pickle
COUNTRY_TLDs = [
".af",
".ax",
".al",
".dz",
".as",
".ad",
".ao",
".ai",
".aq",
".ag",
".ar",
".am",
".aw",
".ac",
".au",
".at",
".az",
".bs",
".bh",
".bd",
".bb",
".eus",
".by",
".be",
".bz",
".bj",
".bm",
".bt",
".bo",
".bq",".an",".nl",
".ba",
".bw",
".bv",
".br",
".io",
".vg",
".bn",
".bg",
".bf",
".mm",
".bi",
".kh",
".cm",
".ca",
".cv",
".cat",
".ky",
".cf",
".td",
".cl",
".cn",
".cx",
".cc",
".co",
".km",
".cd",
".cg",
".ck",
".cr",
".ci",
".hr",
".cu",
".cw",
".cy",
".cz",
".dk",
".dj",
".dm",
".do",
".tl",".tp",
".ec",
".eg",
".sv",
".gq",
".er",
".ee",
".et",
".eu",
".fk",
".fo",
".fm",
".fj",
".fi",
".fr",
".gf",
".pf",
".tf",
".ga",
".gal",
".gm",
".ps",
".ge",
".de",
".gh",
".gi",
".gr",
".gl",
".gd",
".gp",
".gu",
".gt",
".gg",
".gn",
".gw",
".gy",
".ht",
".hm",
".hn",
".hk",
".hu",
".is",
".in",
".id",
".ir",
".iq",
".ie",
".im",
".il",
".it",
".jm",
".jp",
".je",
".jo",
".kz",
".ke",
".ki",
".kw",
".kg",
".la",
".lv",
".lb",
".ls",
".lr",
".ly",
".li",
".lt",
".lu",
".mo",
".mk",
".mg",
".mw",
".my",
".mv",
".ml",
".mt",
".mh",
".mq",
".mr",
".mu",
".yt",
".mx",
".md",
".mc",
".mn",
".me",
".ms",
".ma",
".mz",
".mm",
".na",
".nr",
".np",
".nl",
".nc",
".nz",
".ni",
".ne",
".ng",
".nu",
".nf",
".nc",".tr",
".kp",
".mp",
".no",
".om",
".pk",
".pw",
".ps",
".pa",
".pg",
".py",
".pe",
".ph",
".pn",
".pl",
".pt",
".pr",
".qa",
".ro",
".ru",
".rw",
".re",
".bq",".an",
".bl",".gp",".fr",
".sh",
".kn",
".lc",
".mf",".gp",".fr",
".pm",
".vc",
".ws",
".sm",
".st",
".sa",
".sn",
".rs",
".sc",
".sl",
".sg",
".bq",".an",".nl",
".sx",".an",
".sk",
".si",
".sb",
".so",
".so",
".za",
".gs",
".kr",
".ss",
".es",
".lk",
".sd",
".sr",
".sj",
".sz",
".se",
".ch",
".sy",
".tw",
".tj",
".tz",
".th",
".tg",
".tk",
".to",
".tt",
".tn",
".tr",
".tm",
".tc",
".tv",
".ug",
".ua",
".ae",
".uk",
".us",
".vi",
".uy",
".uz",
".vu",
".va",
".ve",
".vn",
".wf",
".eh",
".ma",
".ye",
".zm",
".zw"
]
def check_domain_brand_inconsistency(logo_boxes,
domain_map_path: str,
model, logo_feat_list,
file_name_list, shot_path: str,
url: str, similarity_threshold: float,
topk: int = 3):
# targetlist domain list
with open(domain_map_path, 'rb') as handle:
domain_map = pickle.load(handle)
print('Number of logo boxes:', len(logo_boxes))
suffix_part = '.' + tldextract.extract(url).suffix
domain_part = tldextract.extract(url).domain
extracted_domain = domain_part + suffix_part
matched_target, matched_domain, matched_coord, this_conf = None, None, None, None
if len(logo_boxes) > 0:
# siamese prediction for logo box
for i, coord in enumerate(logo_boxes):
if i == topk:
break
min_x, min_y, max_x, max_y = coord
bbox = [float(min_x), float(min_y), float(max_x), float(max_y)]
matched_target, matched_domain, this_conf = pred_brand(model, domain_map,
logo_feat_list, file_name_list,
shot_path, bbox,
similarity_threshold=similarity_threshold,
grayscale=False,
do_aspect_ratio_check=False,
do_resolution_alignment=False)
# print(target_this, domain_this, this_conf)
# domain matcher to avoid FP
if matched_target and matched_domain:
matched_coord = coord
matched_domain_parts = [tldextract.extract(x).domain for x in matched_domain]
# If the webpage domain exactly aligns with the target website's domain => Benign
if extracted_domain in matched_domain:
matched_target, matched_domain = None, None # Clear if domains are consistent
elif domain_part in matched_domain_parts:  # If only the second-level domains align and the TLD is regional => Benign
if "." + suffix_part.split('.')[-1] in COUNTRY_TLDs:
matched_target, matched_domain = None, None
else:
break # Inconsistent domain found, break the loop
else:
break # Inconsistent domain found, break the loop
return brand_converter(matched_target), matched_domain, matched_coord, this_conf
def load_model_weights(num_classes: int, weights_path: str):
'''
:param num_classes: number of protected brands
:param weights_path: siamese weights
:return model: siamese model
'''
# Initialize model
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = KNOWN_MODELS["BiT-M-R50x1"](head_size=num_classes, zero_head=True)
# Load weights
weights = torch.load(weights_path, map_location='cpu')
weights = weights['model'] if 'model' in weights.keys() else weights
new_state_dict = OrderedDict()
for k, v in weights.items():
if 'module.' in k:
name = k.split('module.')[1]
else:
name = k
new_state_dict[name] = v
model.load_state_dict(new_state_dict)
model.to(device)
model.eval()
return model
def cache_reference_list(model, targetlist_path: str, grayscale=False):
'''
cache the embeddings of the reference list
:param targetlist_path: targetlist folder
:param grayscale: convert logo to grayscale or not, default is RGB
:return logo_feat_list: targetlist embeddings
:return file_name_list: targetlist paths
'''
# Prediction for targetlists
logo_feat_list = []
file_name_list = []
target_list = os.listdir(targetlist_path)
for target in tqdm(target_list):
if target.startswith('.'): # skip hidden files
continue
logo_list = os.listdir(os.path.join(targetlist_path, target))
for logo_path in logo_list:
# List of valid image extensions
valid_extensions = ['.png', '.PNG', '.jpeg', '.jpg', '.JPG', '.JPEG']
if any(logo_path.endswith(ext) for ext in valid_extensions):
skip_prefixes = ['loginpage', 'homepage']
if any(logo_path.startswith(prefix) for prefix in skip_prefixes): # skip homepage/loginpage
continue
try:
logo_feat_list.append(get_embedding(img=os.path.join(targetlist_path, target, logo_path),
model=model, grayscale=grayscale))
file_name_list.append(str(os.path.join(targetlist_path, target, logo_path)))
except OSError:
print(f"Error opening image: {os.path.join(targetlist_path, target, logo_path)}")
continue
return logo_feat_list, file_name_list
@torch.no_grad()
def get_embedding(img, model, grayscale=False):
'''
Inference for a single image
:param img: image path in str or image in PIL.Image
:param model: model to make inference
:param grayscale: convert image to grayscale or not
:return feature embedding of shape (2048,)
'''
# img_size = 224
img_size = 128
mean = [0.5, 0.5, 0.5]
std = [0.5, 0.5, 0.5]
device = 'cuda' if torch.cuda.is_available() else 'cpu'
img_transforms = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize(mean=mean, std=std),
])
img = Image.open(img) if isinstance(img, str) else img
img = img.convert("L").convert("RGB") if grayscale else img.convert("RGB")
## Resize the image while keeping the original aspect ratio
pad_color = 255 if grayscale else (255, 255, 255)
img = ImageOps.expand(
img,
(
(max(img.size) - img.size[0]) // 2,
(max(img.size) - img.size[1]) // 2,
(max(img.size) - img.size[0]) // 2,
(max(img.size) - img.size[1]) // 2
),
fill=pad_color
)
img = img.resize((img_size, img_size))
# Predict the embedding
img = img_transforms(img)
img = img[None, ...].to(device)
logo_feat = model.features(img)
logo_feat = l2_norm(logo_feat).squeeze(0).cpu().numpy() # L2-normalization final shape is (2048,)
return logo_feat
def chunked_dot(logo_feat_list, img_feat, chunk_size=128):
sim_list = []
for start in range(0, logo_feat_list.shape[0], chunk_size):
end = start + chunk_size
chunk = logo_feat_list[start:end]
sim_chunk = np.dot(chunk, img_feat.T) # shape: (chunk_size, M)
sim_list.extend(sim_chunk)
return sim_list
def pred_brand(model, domain_map, logo_feat_list, file_name_list, shot_path: str, gt_bbox, similarity_threshold,
grayscale=False,
do_resolution_alignment=True,
do_aspect_ratio_check=True):
'''
Return predicted brand for one cropped image
:param model: model to use
:param domain_map: brand-domain dictionary
:param logo_feat_list: reference logo feature embeddings
:param file_name_list: reference logo paths
:param shot_path: path to the screenshot
:param gt_bbox: 1x4 np.ndarray/list/tensor bounding box coords
:param similarity_threshold: similarity threshold for siamese
:param do_resolution_alignment: if the similarity does not exceed the threshold, do we align their resolutions to have a retry
:param do_aspect_ratio_check: once two logos are similar, whether we want a further check on their aspect ratios
:param grayscale: convert image(cropped) to grayscale or not
:return: predicted target, predicted target's domain
'''
try:
img = Image.open(shot_path)
except OSError: # if the image cannot be identified, return nothing
print('Screenshot cannot be opened')
return None, None, None
# get predicted box --> crop from screenshot
cropped = img.crop((gt_bbox[0], gt_bbox[1], gt_bbox[2], gt_bbox[3]))
img_feat = get_embedding(cropped, model, grayscale=grayscale)
# get cosine similarity with every protected logo
sim_list = chunked_dot(logo_feat_list, img_feat) # take dot product for every pair of embeddings (Cosine Similarity)
pred_brand_list = file_name_list
assert len(sim_list) == len(pred_brand_list)
# get top 3 brands
idx = np.argsort(sim_list)[::-1][:3]
pred_brand_list = np.array(pred_brand_list)[idx]
sim_list = np.array(sim_list)[idx]
# top1,2,3 candidate logos
top3_brandlist = [brand_converter(os.path.basename(os.path.dirname(x))) for x in pred_brand_list]
top3_domainlist = [domain_map[x] for x in top3_brandlist]
top3_simlist = sim_list
for j in range(3):
predicted_brand, predicted_domain = None, None
# When trying lower-ranked logos, their predicted brand must match the top-1 logo; otherwise it might be a false positive
if top3_brandlist[j] != top3_brandlist[0]:
continue
# If the largest similarity exceeds threshold
if top3_simlist[j] >= similarity_threshold:
predicted_brand = top3_brandlist[j]
predicted_domain = top3_domainlist[j]
final_sim = top3_simlist[j]
# Else if not exceed, try resolution alignment, see if can improve
elif do_resolution_alignment:
orig_candidate_logo = Image.open(pred_brand_list[j])
cropped, candidate_logo = resolution_alignment(cropped, orig_candidate_logo)
img_feat = get_embedding(cropped, model, grayscale=grayscale)
logo_feat = get_embedding(candidate_logo, model, grayscale=grayscale)
final_sim = logo_feat.dot(img_feat)
if final_sim >= similarity_threshold:
predicted_brand = top3_brandlist[j]
predicted_domain = top3_domainlist[j]
else:
break # no hope, do not try other lower rank logos
## If there is a prediction, do aspect ratio check
if predicted_brand is not None:
if do_aspect_ratio_check:
orig_candidate_logo = Image.open(pred_brand_list[j])
ratio_crop = cropped.size[0] / cropped.size[1]
ratio_logo = orig_candidate_logo.size[0] / orig_candidate_logo.size[1]
# aspect ratios of matched pair must not deviate by more than factor of 2.5
if max(ratio_crop, ratio_logo) / min(ratio_crop, ratio_logo) > 2.5:
continue # did not pass aspect ratio check, try other
return predicted_brand, predicted_domain, final_sim
return None, None, top3_simlist[0]
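Because both the cached reference embeddings and the query embedding are L2-normalized (see `get_embedding`), the dot products computed by `chunked_dot` are cosine similarities. A minimal numpy sketch of the chunked scoring and descending top-3 ranking used in `pred_brand`, with made-up toy embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake L2-normalized reference embeddings (5 logos, 8-dim) and a query
# that is a slightly perturbed copy of logo 3.
refs = rng.normal(size=(5, 8))
refs /= np.linalg.norm(refs, axis=1, keepdims=True)
query = refs[3] + 0.01 * rng.normal(size=8)
query /= np.linalg.norm(query)

# Chunked dot product, as in chunked_dot (chunk size smaller than N here).
sims = []
for start in range(0, refs.shape[0], 2):
    sims.extend(refs[start:start + 2] @ query)
sims = np.array(sims)

# Rank descending and keep the top-3 candidates, as pred_brand does.
top3 = np.argsort(sims)[::-1][:3]
print(int(top3[0]))  # the near-duplicate, index 3, ranks first
```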
================================================
FILE: logo_recog.py
================================================
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor
import cv2
import numpy as np
import torch
def pred_rcnn(im, predictor):
    '''
    Perform inference for RCNN
    :param im: path to the screenshot
    :param predictor: detectron2 DefaultPredictor
    :return logo_boxes: Nx4 tensor of logo bounding boxes
    '''
    img_path = im  # keep the path before it is overwritten by the image array
    im = cv2.imread(img_path)
    if im is not None:
        if im.shape[-1] == 4:
            im = cv2.cvtColor(im, cv2.COLOR_BGRA2BGR)
    else:
        print(f"Image at path {img_path} is None")
        return None
outputs = predictor(im)
instances = outputs['instances']
pred_classes = instances.pred_classes # tensor
pred_boxes = instances.pred_boxes # Boxes object
logo_boxes = pred_boxes[pred_classes == 1].tensor
return logo_boxes
def config_rcnn(cfg_path, weights_path, conf_threshold):
'''
Configure weights and confidence threshold
:param cfg_path:
:param weights_path:
:param conf_threshold:
:return:
'''
cfg = get_cfg()
cfg.merge_from_file(cfg_path)
cfg.MODEL.WEIGHTS = weights_path
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = conf_threshold
# uncomment if you installed detectron2 cpu version
if not torch.cuda.is_available():
cfg.MODEL.DEVICE = 'cpu'
# Initialize model
predictor = DefaultPredictor(cfg)
return predictor
COLORS = {
0: (255, 255, 0), # logo
1: (36, 255, 12), # input
2: (0, 255, 255), # button
3: (0, 0, 255), # label
4: (255, 0, 0) # block
}
def vis(img_path, pred_boxes):
    '''
    Visualize rcnn predictions
    :param img_path: str
    :param pred_boxes: torch.Tensor of shape Nx4, bounding box coordinates in (x1, y1, x2, y2)
    :return check: image with the predicted boxes drawn
    '''
check = cv2.imread(img_path)
if pred_boxes is None or len(pred_boxes) == 0:
print("Pred_boxes is None or the length of pred_boxes is 0")
return check
pred_boxes = pred_boxes.numpy() if not isinstance(pred_boxes, np.ndarray) else pred_boxes
# draw rectangle
for j, box in enumerate(pred_boxes):
if j == 0:
cv2.rectangle(check, (int(box[0]), int(box[1])), (int(box[2]), int(box[3])), COLORS[0], 2)
else:
cv2.rectangle(check, (int(box[0]), int(box[1])), (int(box[2]), int(box[3])), COLORS[1], 2)
return check
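`pred_rcnn` keeps only the detections whose predicted class id corresponds to a logo by boolean-masking the box tensor with `pred_classes == 1`. The same pattern on plain numpy arrays, with made-up class ids and boxes:

```python
import numpy as np

# Hypothetical detector output: per-box class ids and (x1, y1, x2, y2) coords.
pred_classes = np.array([0, 1, 1, 2])
pred_boxes = np.array([[0, 0, 10, 10],
                       [5, 5, 50, 20],
                       [8, 2, 40, 18],
                       [1, 1, 4, 4]])

# Boolean mask keeps only boxes of the wanted class, preserving order.
logo_boxes = pred_boxes[pred_classes == 1]
print(logo_boxes.shape)  # (2, 4)
```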
================================================
FILE: models.py
================================================
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Lint as: python3
"""Bottleneck ResNet v2 with GroupNorm and Weight Standardization."""
from collections import OrderedDict # pylint: disable=g-importing-member
import torch
import torch.nn as nn
import torch.nn.functional as F
class StdConv2d(nn.Conv2d):
def forward(self, x):
w = self.weight
v, m = torch.var_mean(w, dim=[1, 2, 3], keepdim=True, unbiased=False)
w = (w - m) / torch.sqrt(v + 1e-10)
return F.conv2d(x, w, self.bias, self.stride, self.padding,
self.dilation, self.groups)
def conv3x3(cin, cout, stride=1, groups=1, bias=False):
return StdConv2d(cin, cout, kernel_size=3, stride=stride,
padding=1, bias=bias, groups=groups)
def conv1x1(cin, cout, stride=1, bias=False):
return StdConv2d(cin, cout, kernel_size=1, stride=stride,
padding=0, bias=bias)
def tf2th(conv_weights):
"""Possibly convert HWIO to OIHW."""
if conv_weights.ndim == 4:
conv_weights = conv_weights.transpose([3, 2, 0, 1])
return torch.from_numpy(conv_weights)
class PreActBottleneck(nn.Module):
"""Pre-activation (v2) bottleneck block.
Follows the implementation of "Identity Mappings in Deep Residual Networks":
https://github.com/KaimingHe/resnet-1k-layers/blob/master/resnet-pre-act.lua
Except it puts the stride on 3x3 conv when available.
"""
def __init__(self, cin, cout=None, cmid=None, stride=1):
super().__init__()
cout = cout or cin
cmid = cmid or cout // 4
self.gn1 = nn.GroupNorm(32, cin)
self.conv1 = conv1x1(cin, cmid)
self.gn2 = nn.GroupNorm(32, cmid)
self.conv2 = conv3x3(cmid, cmid, stride) # Original code has it on conv1!!
self.gn3 = nn.GroupNorm(32, cmid)
self.conv3 = conv1x1(cmid, cout)
self.relu = nn.ReLU(inplace=True)
if (stride != 1 or cin != cout):
# Projection also with pre-activation according to paper.
self.downsample = conv1x1(cin, cout, stride)
def forward(self, x):
out = self.relu(self.gn1(x))
# Residual branch
residual = x
if hasattr(self, 'downsample'):
residual = self.downsample(out)
# Unit's branch
out = self.conv1(out)
out = self.conv2(self.relu(self.gn2(out)))
out = self.conv3(self.relu(self.gn3(out)))
return out + residual
def load_from(self, weights, prefix=''):
convname = 'standardized_conv2d'
with torch.no_grad():
self.conv1.weight.copy_(tf2th(weights[f'{prefix}a/{convname}/kernel']))
self.conv2.weight.copy_(tf2th(weights[f'{prefix}b/{convname}/kernel']))
self.conv3.weight.copy_(tf2th(weights[f'{prefix}c/{convname}/kernel']))
self.gn1.weight.copy_(tf2th(weights[f'{prefix}a/group_norm/gamma']))
self.gn2.weight.copy_(tf2th(weights[f'{prefix}b/group_norm/gamma']))
self.gn3.weight.copy_(tf2th(weights[f'{prefix}c/group_norm/gamma']))
self.gn1.bias.copy_(tf2th(weights[f'{prefix}a/group_norm/beta']))
self.gn2.bias.copy_(tf2th(weights[f'{prefix}b/group_norm/beta']))
self.gn3.bias.copy_(tf2th(weights[f'{prefix}c/group_norm/beta']))
if hasattr(self, 'downsample'):
w = weights[f'{prefix}a/proj/{convname}/kernel']
self.downsample.weight.copy_(tf2th(w))
class ResNetV2(nn.Module):
"""Implementation of Pre-activation (v2) ResNet mode."""
def __init__(self, block_units, width_factor, head_size=21843, zero_head=False):
super().__init__()
wf = width_factor # shortcut 'cause we'll use it a lot.
# The following will be unreadable if we split lines.
# pylint: disable=line-too-long
self.root = nn.Sequential(OrderedDict([
('conv', StdConv2d(3, 64 * wf, kernel_size=7, stride=2, padding=3, bias=False)),
('pad', nn.ConstantPad2d(1, 0)),
('pool', nn.MaxPool2d(kernel_size=3, stride=2, padding=0)),
# The following is subtly not the same!
# ('pool', nn.MaxPool2d(kernel_size=3, stride=2, padding=1)),
]))
self.body = nn.Sequential(OrderedDict([
('block1', nn.Sequential(OrderedDict(
[('unit01', PreActBottleneck(cin=64 * wf, cout=256 * wf, cmid=64 * wf))] +
[(f'unit{i:02d}', PreActBottleneck(cin=256 * wf, cout=256 * wf, cmid=64 * wf)) for i in
range(2, block_units[0] + 1)],
))),
('block2', nn.Sequential(OrderedDict(
[('unit01', PreActBottleneck(cin=256 * wf, cout=512 * wf, cmid=128 * wf, stride=2))] +
[(f'unit{i:02d}', PreActBottleneck(cin=512 * wf, cout=512 * wf, cmid=128 * wf)) for i in
range(2, block_units[1] + 1)],
))),
('block3', nn.Sequential(OrderedDict(
[('unit01', PreActBottleneck(cin=512 * wf, cout=1024 * wf, cmid=256 * wf, stride=2))] +
[(f'unit{i:02d}', PreActBottleneck(cin=1024 * wf, cout=1024 * wf, cmid=256 * wf)) for i in
range(2, block_units[2] + 1)],
))),
('block4', nn.Sequential(OrderedDict(
[('unit01', PreActBottleneck(cin=1024 * wf, cout=2048 * wf, cmid=512 * wf, stride=2))] +
[(f'unit{i:02d}', PreActBottleneck(cin=2048 * wf, cout=2048 * wf, cmid=512 * wf)) for i in
range(2, block_units[3] + 1)],
))),
]))
# pylint: enable=line-too-long
self.zero_head = zero_head
self.head = nn.Sequential(OrderedDict([
('gn', nn.GroupNorm(32, 2048 * wf)),
('relu', nn.ReLU(inplace=True)),
('avg', nn.AdaptiveAvgPool2d(output_size=1)),
('conv', nn.Conv2d(2048 * wf, head_size, kernel_size=1, bias=True)),
]))
def features(self, x):
x = self.head[:-1](self.body(self.root(x)))
return x.squeeze(-1).squeeze(-1)
def forward(self, x):
x = self.head(self.body(self.root(x)))
assert x.shape[-2:] == (1, 1) # We should have no spatial shape left.
return x[..., 0, 0]
def load_from(self, weights, prefix='resnet/'):
with torch.no_grad():
self.root.conv.weight.copy_(
tf2th(weights[f'{prefix}root_block/standardized_conv2d/kernel'])) # pylint: disable=line-too-long
self.head.gn.weight.copy_(tf2th(weights[f'{prefix}group_norm/gamma']))
self.head.gn.bias.copy_(tf2th(weights[f'{prefix}group_norm/beta']))
if self.zero_head:
nn.init.zeros_(self.head.conv.weight)
nn.init.zeros_(self.head.conv.bias)
else:
self.head.conv.weight.copy_(
tf2th(weights[f'{prefix}head/conv2d/kernel'])) # pylint: disable=line-too-long
self.head.conv.bias.copy_(tf2th(weights[f'{prefix}head/conv2d/bias']))
for bname, block in self.body.named_children():
for uname, unit in block.named_children():
unit.load_from(weights, prefix=f'{prefix}{bname}/{uname}/')
KNOWN_MODELS = OrderedDict([
('BiT-M-R50x1', lambda *a, **kw: ResNetV2([3, 4, 6, 3], 1, *a, **kw)),
('BiT-M-R50x3', lambda *a, **kw: ResNetV2([3, 4, 6, 3], 3, *a, **kw)),
('BiT-M-R101x1', lambda *a, **kw: ResNetV2([3, 4, 23, 3], 1, *a, **kw)),
('BiT-M-R101x3', lambda *a, **kw: ResNetV2([3, 4, 23, 3], 3, *a, **kw)),
('BiT-M-R152x2', lambda *a, **kw: ResNetV2([3, 8, 36, 3], 2, *a, **kw)),
('BiT-M-R152x4', lambda *a, **kw: ResNetV2([3, 8, 36, 3], 4, *a, **kw)),
('BiT-S-R50x1', lambda *a, **kw: ResNetV2([3, 4, 6, 3], 1, *a, **kw)),
('BiT-S-R50x3', lambda *a, **kw: ResNetV2([3, 4, 6, 3], 3, *a, **kw)),
('BiT-S-R101x1', lambda *a, **kw: ResNetV2([3, 4, 23, 3], 1, *a, **kw)),
('BiT-S-R101x3', lambda *a, **kw: ResNetV2([3, 4, 23, 3], 3, *a, **kw)),
('BiT-S-R152x2', lambda *a, **kw: ResNetV2([3, 8, 36, 3], 2, *a, **kw)),
('BiT-S-R152x4', lambda *a, **kw: ResNetV2([3, 8, 36, 3], 4, *a, **kw)),
])
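The registry names encode the architecture: `BiT-{M,S}-R{depth}x{width}`, where M/S distinguishes the pretraining dataset in the BiT papers, the depth tag selects the per-stage unit counts, and the width factor multiplies every channel count. A small hypothetical parser (`parse_bit_name` is not part of the repo) makes the mapping onto `ResNetV2(block_units, width_factor)` explicit:

```python
# Hypothetical helper (not in the repo) showing how KNOWN_MODELS names
# map onto ResNetV2(block_units, width_factor) arguments.
STAGE_UNITS = {'R50': [3, 4, 6, 3], 'R101': [3, 4, 23, 3], 'R152': [3, 8, 36, 3]}

def parse_bit_name(name):
    _, _, arch = name.split('-')      # e.g. 'BiT-M-R101x3' -> 'R101x3'
    depth, width = arch.split('x')    # -> 'R101', '3'
    return STAGE_UNITS[depth], int(width)

print(parse_bit_name('BiT-M-R101x3'))  # ([3, 4, 23, 3], 3)
```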
================================================
FILE: phishpedia.py
================================================
import time
from datetime import datetime
import argparse
import os
import torch
import cv2
from configs import load_config
from logo_recog import pred_rcnn, vis
from logo_matching import check_domain_brand_inconsistency
from tqdm import tqdm
import re
os.environ['KMP_DUPLICATE_LIB_OK'] = 'True'
def result_file_write(f, folder, url, phish_category, pred_target, matched_domain, siamese_conf, logo_recog_time,
                      logo_match_time):
    # One tab-separated record per analyzed site; pred_target is the top-1 prediction only
    fields = [folder, url, str(phish_category), str(pred_target), str(matched_domain), str(siamese_conf),
              str(round(logo_recog_time, 4)), str(round(logo_match_time, 4))]
    f.write("\t".join(fields) + "\n")
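The writer above emits one tab-separated record per analyzed site. A self-contained sketch of that record layout, with made-up field values, and how it parses back:

```python
# Field order matches result_file_write's argument order; values are invented.
record = "\t".join([
    "accounts.g.cdcde.com",          # folder
    "https://accounts.g.cdcde.com",  # url
    "1",                             # phish_category (0 benign, 1 phish)
    "PayPal",                        # pred_target (top-1 prediction only)
    "['paypal.com']",                # matched_domain
    "0.9123",                        # siamese_conf
    "0.21",                          # logo_recog_time (seconds)
    "0.05",                          # logo_match_time (seconds)
]) + "\n"

fields = record.rstrip("\n").split("\t")
print(len(fields))  # 8
```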
class PhishpediaWrapper:
_caller_prefix = "PhishpediaWrapper"
_DEVICE = 'cuda' if torch.cuda.is_available() else 'cpu'
def __init__(self):
self._load_config()
def _load_config(self):
self.ELE_MODEL, self.SIAMESE_THRE, self.SIAMESE_MODEL, \
self.LOGO_FEATS, self.LOGO_FILES, \
self.DOMAIN_MAP_PATH = load_config()
print(f'Length of reference list = {len(self.LOGO_FEATS)}')
def test_orig_phishpedia(self, url, screenshot_path, html_path):
# 0 for benign, 1 for phish, default is benign
phish_category = 0
pred_target = None
matched_domain = None
siamese_conf = None
plotvis = None
logo_match_time = 0
print("Entering phishpedia")
####################### Step1: Logo detector ##############################################
start_time = time.time()
pred_boxes = pred_rcnn(im=screenshot_path, predictor=self.ELE_MODEL)
logo_recog_time = time.time() - start_time
if pred_boxes is not None:
pred_boxes = pred_boxes.detach().cpu().numpy()
plotvis = vis(screenshot_path, pred_boxes)
# If no element is reported
if pred_boxes is None or len(pred_boxes) == 0:
print('No logo is detected')
return phish_category, pred_target, matched_domain, plotvis, siamese_conf, pred_boxes, logo_recog_time, logo_match_time
######################## Step2: Siamese (Logo matcher) ########################################
start_time = time.time()
pred_target, matched_domain, matched_coord, siamese_conf = check_domain_brand_inconsistency(
logo_boxes=pred_boxes,
domain_map_path=self.DOMAIN_MAP_PATH,
model=self.SIAMESE_MODEL,
logo_feat_list=self.LOGO_FEATS,
file_name_list=self.LOGO_FILES,
url=url,
shot_path=screenshot_path,
similarity_threshold=self.SIAMESE_THRE,
topk=1)
logo_match_time = time.time() - start_time
if pred_target is None:
print('Did not match to any brand, report as benign')
return phish_category, pred_target, matched_domain, plotvis, siamese_conf, pred_boxes, logo_recog_time, logo_match_time
print('Match to Target: {} with confidence {:.4f}'.format(pred_target, siamese_conf))
phish_category = 1
# Visualize, add annotations
cv2.putText(plotvis, "Target: {} with confidence {:.4f}".format(pred_target, siamese_conf),
(int(matched_coord[0] + 20), int(matched_coord[1] + 20)),
cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 0), 2)
return phish_category, pred_target, matched_domain, plotvis, siamese_conf, pred_boxes, logo_recog_time, logo_match_time
if __name__ == '__main__':
'''run'''
today = datetime.now().strftime('%Y%m%d')
parser = argparse.ArgumentParser()
parser.add_argument("--folder", required=True, type=str)
parser.add_argument("--output_txt", default=f'{today}_results.txt', help="Output txt path")
args = parser.parse_args()
request_dir = args.folder
phishpedia_cls = PhishpediaWrapper()
result_txt = args.output_txt
os.makedirs(request_dir, exist_ok=True)
for folder in tqdm(os.listdir(request_dir)):
html_path = os.path.join(request_dir, folder, "html.txt")
screenshot_path = os.path.join(request_dir, folder, "shot.png")
info_path = os.path.join(request_dir, folder, 'info.txt')
if not os.path.exists(screenshot_path):
continue
if not os.path.exists(html_path):
html_path = os.path.join(request_dir, folder, "index.html")
        if not os.path.exists(info_path):
            continue
        with open(info_path, 'r') as file:
            url = file.read()
if os.path.exists(result_txt):
with open(result_txt, 'r', encoding='ISO-8859-1') as file:
if url in file.read():
continue
_forbidden_suffixes = r"\.(mp3|wav|wma|ogg|mkv|zip|tar|xz|rar|z|deb|bin|iso|csv|tsv|dat|txt|css|log|xml|sql|mdb|apk|bat|exe|jar|wsf|fnt|fon|otf|ttf|ai|bmp|gif|ico|jp(e)?g|png|ps|psd|svg|tif|tiff|cer|rss|key|odp|pps|ppt|pptx|c|class|cpp|cs|h|java|sh|swift|vb|odf|xlr|xls|xlsx|bak|cab|cfg|cpl|cur|dll|dmp|drv|icns|ini|lnk|msi|sys|tmp|3g2|3gp|avi|flv|h264|m4v|mov|mp4|mp(e)?g|rm|swf|vob|wmv|doc(x)?|odt|rtf|tex|wks|wps|wpd)$"
if re.search(_forbidden_suffixes, url, re.IGNORECASE):
continue
phish_category, pred_target, matched_domain, \
plotvis, siamese_conf, pred_boxes, \
logo_recog_time, logo_match_time = phishpedia_cls.test_orig_phishpedia(url, screenshot_path, html_path)
try:
with open(result_txt, "a+", encoding='ISO-8859-1') as f:
result_file_write(f, folder, url, phish_category, pred_target, matched_domain, siamese_conf,
logo_recog_time, logo_match_time)
except UnicodeError:
with open(result_txt, "a+", encoding='utf-8') as f:
result_file_write(f, folder, url, phish_category, pred_target, matched_domain, siamese_conf,
logo_recog_time, logo_match_time)
if phish_category:
os.makedirs(os.path.join(request_dir, folder), exist_ok=True)
cv2.imwrite(os.path.join(request_dir, folder, "predict.png"), plotvis)
================================================
FILE: pixi.toml
================================================
[project]
name = "phishpedia"
channels = ["conda-forge"]
platforms = ["osx-arm64", "linux-64", "win-64"]
[dependencies]
python = ">=3.8"
pip = "*"
setuptools = "*"
wheel = "*"
numpy = "1.23.0"
requests = "*"
scikit-learn = "*"
spacy = "*"
beautifulsoup4 = "*"
matplotlib = "*"
pandas = "*"
nltk = "*"
tqdm = "*"
unidecode = "*"
gdown = "*"
tldextract = "*"
scipy = "*"
pathlib = "*"
fvcore = "*"
lxml = "*"
psutil = "*"
Pillow = "8.4.0"
[pypi-dependencies]
"flask" = "*"
"flask-cors" = "*"
"pycocotools" = "*"
"opencv-python"= "*"
"opencv-contrib-python"= "*"
torch = { version = ">=1.9.0", index = "https://download.pytorch.org/whl/cpu" }
torchvision = { version = ">=0.10.0", index = "https://download.pytorch.org/whl/cpu" }
================================================
FILE: setup.bat
================================================
@echo off
setlocal enabledelayedexpansion
:: ------------------------------------------------------------------------------
:: Initialization and Logging
:: ------------------------------------------------------------------------------
echo [%DATE% %TIME%] Starting setup...
:: ------------------------------------------------------------------------------
:: Tool Checks
:: ------------------------------------------------------------------------------
where pixi >nul 2>nul || (
echo [ERROR] pixi not found. Please install Pixi.
exit /b 1
)
where gdown >nul 2>nul || (
    echo [ERROR] gdown not found. Please install gdown via pixi.
    exit /b 1
)
where unzip >nul 2>nul || (
echo [ERROR] unzip not found. Please install unzip utility.
exit /b 1
)
:: ------------------------------------------------------------------------------
:: Setup Directories
:: ------------------------------------------------------------------------------
set "FILEDIR=%cd%"
set "MODELS_DIR=%FILEDIR%\models"
if not exist "%MODELS_DIR%" mkdir "%MODELS_DIR%"
cd /d "%MODELS_DIR%"
:: ------------------------------------------------------------------------------
:: Install Detectron2
:: ------------------------------------------------------------------------------
echo [%DATE% %TIME%] Installing detectron2...
pixi run pip install --no-build-isolation git+https://github.com/facebookresearch/detectron2.git || (
echo [ERROR] Failed to install detectron2.
exit /b 1
)
:: ------------------------------------------------------------------------------
:: File Metadata
:: ------------------------------------------------------------------------------
set RETRY_COUNT=3
:: Model files and Google Drive IDs
set file1=rcnn_bet365.pth
set id1=1tE2Mu5WC8uqCxei3XqAd7AWaP5JTmVWH
set file2=faster_rcnn.yaml
set id2=1Q6lqjpl4exW7q_dPbComcj0udBMDl8CW
set file3=resnetv2_rgb_new.pth.tar
set id3=1H0Q_DbdKPLFcZee8I14K62qV7TTy7xvS
set file4=expand_targetlist.zip
set id4=1fr5ZxBKyDiNZ_1B6rRAfZbAHBBoUjZ7I
set file5=domain_map.pkl
set id5=1qSdkSSoCYUkZMKs44Rup_1DPBxHnEKl1
:: ------------------------------------------------------------------------------
:: Download Loop
:: ------------------------------------------------------------------------------
for /L %%i in (1,1,5) do (
    call :download_one %%i || exit /b 1
)
goto :after_downloads

:download_one
:: Resolve fileN/idN indirection for the given index (labels and goto do not
:: work inside a parenthesized for-block, hence this subroutine).
call set "FILENAME=%%file%1%%"
call set "FILEID=%%id%1%%"
if exist "%FILENAME%" (
    echo [INFO] %FILENAME% already exists. Skipping.
    exit /b 0
)
set /A count=1
:retry
echo [%DATE% %TIME%] Downloading %FILENAME% ^(Attempt !count!/%RETRY_COUNT%^)...
pixi run gdown --id %FILEID% -O "%FILENAME%" && exit /b 0
set /A count+=1
if !count! LEQ %RETRY_COUNT% (
    timeout /t 2 >nul
    goto retry
)
echo [ERROR] Failed to download %FILENAME% after %RETRY_COUNT% attempts.
exit /b 1

:after_downloads
:: ------------------------------------------------------------------------------
:: Extraction
:: ------------------------------------------------------------------------------
echo [%DATE% %TIME%] Extracting expand_targetlist.zip...
unzip -o expand_targetlist.zip -d expand_targetlist || (
echo [ERROR] Failed to unzip file.
exit /b 1
)
:: Flatten nested folder if necessary
cd expand_targetlist
if exist expand_targetlist\ (
    echo [INFO] Flattening nested expand_targetlist directory...
    :: move brand subdirectories first, then any loose files
    for /d %%D in ("expand_targetlist\*") do move "%%D" . >nul
    move expand_targetlist\*.* . >nul 2>nul
    rmdir expand_targetlist
)
:: ------------------------------------------------------------------------------
:: Done
:: ------------------------------------------------------------------------------
echo [%DATE% %TIME%] [SUCCESS] Model setup and extraction complete.
endlocal
================================================
FILE: setup.sh
================================================
#!/bin/bash
set -euo pipefail # Safer bash behavior
IFS=$'\n\t'
# Install Detectron2
pixi run pip install --no-build-isolation git+https://github.com/facebookresearch/detectron2.git
# Set up model directory
FILEDIR="$(pwd)"
MODELS_DIR="$FILEDIR/models"
mkdir -p "$MODELS_DIR"
cd "$MODELS_DIR"
# Download model files
pixi run gdown --id "1tE2Mu5WC8uqCxei3XqAd7AWaP5JTmVWH" -O "rcnn_bet365.pth"
pixi run gdown --id "1Q6lqjpl4exW7q_dPbComcj0udBMDl8CW" -O "faster_rcnn.yaml"
pixi run gdown --id "1H0Q_DbdKPLFcZee8I14K62qV7TTy7xvS" -O "resnetv2_rgb_new.pth.tar"
pixi run gdown --id "1fr5ZxBKyDiNZ_1B6rRAfZbAHBBoUjZ7I" -O "expand_targetlist.zip"
pixi run gdown --id "1qSdkSSoCYUkZMKs44Rup_1DPBxHnEKl1" -O "domain_map.pkl"
# Extract and flatten expand_targetlist
echo "Extracting expand_targetlist.zip..."
unzip -o expand_targetlist.zip -d expand_targetlist
cd expand_targetlist || { echo "Extraction directory missing." >&2; exit 1; }
if [ -d "expand_targetlist" ]; then
echo "Flattening nested expand_targetlist/ directory..."
mv expand_targetlist/* .
rm -r expand_targetlist
fi
echo "Model setup and extraction complete."
================================================
FILE: utils.py
================================================
import torch.nn.functional as F
import math
def resolution_alignment(img1, img2):
'''
Resize two images according to the minimum resolution between the two
:param img1: first image in PIL.Image
:param img2: second image in PIL.Image
:return: resized img1 in PIL.Image, resized img2 in PIL.Image
'''
w1, h1 = img1.size
w2, h2 = img2.size
w_min, h_min = min(w1, w2), min(h1, h2)
if w_min == 0 or h_min == 0: # something wrong, stop resizing
return img1, img2
if w_min < h_min:
img1_resize = img1.resize((int(w_min), math.ceil(h1 * (w_min / w1)))) # ceiling to prevent rounding to 0
img2_resize = img2.resize((int(w_min), math.ceil(h2 * (w_min / w2))))
else:
img1_resize = img1.resize((math.ceil(w1 * (h_min / h1)), int(h_min)))
img2_resize = img2.resize((math.ceil(w2 * (h_min / h2)), int(h_min)))
return img1_resize, img2_resize
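The same scaling rule can be checked without PIL by applying it to plain `(width, height)` tuples; `aligned_sizes` below is illustrative, not part of the repo:

```python
import math

# Mirror of resolution_alignment's arithmetic on (width, height) tuples:
# both images are scaled so they share the smaller of the two minimum sides,
# with math.ceil preventing a dimension from rounding down to 0.
def aligned_sizes(size1, size2):
    (w1, h1), (w2, h2) = size1, size2
    w_min, h_min = min(w1, w2), min(h1, h2)
    if w_min == 0 or h_min == 0:  # something wrong, stop resizing
        return size1, size2
    if w_min < h_min:
        return ((w_min, math.ceil(h1 * w_min / w1)),
                (w_min, math.ceil(h2 * w_min / w2)))
    return ((math.ceil(w1 * h_min / h1), h_min),
            (math.ceil(w2 * h_min / h2), h_min))

print(aligned_sizes((200, 100), (50, 80)))  # ((50, 25), (50, 80))
```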
def brand_converter(brand_name):
'''
Helper function to deal with inconsistency in brand naming
'''
brand_tran_dict = {'Adobe Inc.': 'Adobe', 'Adobe Inc': 'Adobe',
'ADP, LLC': 'ADP', 'ADP, LLC.': 'ADP',
'Amazon.com Inc.': 'Amazon', 'Amazon.com Inc': 'Amazon',
'Americanas.com S,A Comercio Electrnico': 'Americanas.com S',
'AOL Inc.': 'AOL', 'AOL Inc': 'AOL',
'Apple Inc.': 'Apple', 'Apple Inc': 'Apple',
'AT&T Inc.': 'AT&T', 'AT&T Inc': 'AT&T',
'Banco do Brasil S.A.': 'Banco do Brasil S.A',
'Credit Agricole S.A.': 'Credit Agricole S.A',
'DGI (French Tax Authority)': 'DGI French Tax Authority',
'DHL Airways, Inc.': 'DHL Airways', 'DHL Airways, Inc': 'DHL Airways', 'DHL': 'DHL Airways',
'Dropbox, Inc.': 'Dropbox', 'Dropbox, Inc': 'Dropbox',
'eBay Inc.': 'eBay', 'eBay Inc': 'eBay',
'Facebook, Inc.': 'Facebook', 'Facebook, Inc': 'Facebook',
'Free (ISP)': 'Free ISP',
'Google Inc.': 'Google', 'Google Inc': 'Google',
'Mastercard International Incorporated': 'Mastercard International',
'Netflix Inc.': 'Netflix', 'Netflix Inc': 'Netflix',
'PayPal Inc.': 'PayPal', 'PayPal Inc': 'PayPal',
'Royal KPN N.V.': 'Royal KPN N.V',
'SF Express Co.': 'SF Express Co',
'SNS Bank N.V.': 'SNS Bank N.V',
'Square, Inc.': 'Square', 'Square, Inc': 'Square',
'Webmail Providers': 'Webmail Provider',
'Yahoo! Inc': 'Yahoo!', 'Yahoo! Inc.': 'Yahoo!',
'Microsoft OneDrive': 'Microsoft', 'Office365': 'Microsoft', 'Outlook': 'Microsoft',
'Global Sources (HK)': 'Global Sources HK',
'T-Online': 'Deutsche Telekom',
'Airbnb, Inc': 'Airbnb, Inc.',
'azul': 'Azul',
'Raiffeisen Bank S.A': 'Raiffeisen Bank S.A.',
'Twitter, Inc': 'Twitter, Inc.', 'Twitter': 'Twitter, Inc.',
'capital_one': 'Capital One Financial Corporation',
'la_banque_postale': 'La Banque postale',
'db': 'Deutsche Bank AG',
'Swiss Post': 'PostFinance', 'PostFinance': 'PostFinance',
'grupo_bancolombia': 'Bancolombia',
'barclays': 'Barclays Bank Plc',
'gov_uk': 'Government of the United Kingdom',
'Aruba S.p.A': 'Aruba S.p.A.',
'TSB Bank Plc': 'TSB Bank Limited',
'strato': 'Strato AG',
'cogeco': 'Cogeco',
'Canada Revenue Agency': 'Government of Canada',
'UniCredit Bulbank': 'UniCredit Bank Aktiengesellschaft',
'ameli_fr': 'French Health Insurance',
'Banco de Credito del Peru': 'bcp'
}
    # Look up the canonical name; fall back to the original brand name
    return brand_tran_dict.get(brand_name, brand_name)
def l2_norm(x):
"""
l2 normalization
:param x:
:return:
"""
if len(x.shape):
x = x.reshape((x.shape[0], -1))
return F.normalize(x, p=2, dim=1)
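Normalizing rows to unit length means embeddings can then be compared by plain dot products, which is presumably why the matcher's similarity step can be a matrix product. The same computation, sketched in NumPy instead of `torch.nn.functional.normalize`:

```python
import numpy as np

# NumPy sketch of l2_norm: flatten each row, then scale it to unit
# Euclidean length (the small epsilon guards against all-zero rows,
# matching F.normalize's default eps).
def l2_norm_np(x):
    x = x.reshape(x.shape[0], -1)
    norms = np.linalg.norm(x, axis=1, keepdims=True)
    return x / np.maximum(norms, 1e-12)

v = l2_norm_np(np.array([[3.0, 4.0]]))
print(v)  # [[0.6 0.8]]
```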