Repository: Barqawiz/Shakkala
Branch: master
Commit: a01b2e81a281
Files: 17
Total size: 28.8 MB
Directory structure:
gitextract_s4jr2zrj/
├── .gitignore
├── .github/
│ └── FUNDING.yml
├── LICENSE.md
├── PIP_README.md
├── README.md
├── requirements/
│ ├── publish_commands.txt
│ └── requirements.txt
├── setup.py
└── shakkala/
├── Shakkala.py
├── __init__.py
├── demo.py
├── dictionary/
│ ├── input_vocab_to_int.pickle
│ └── output_int_to_vocab.pickle
├── helper.py
└── model/
├── middle_model.h5
├── second_model6.h5
└── simple_model.h5
================================================
FILE CONTENTS
================================================
================================================
FILE: .gitignore
================================================
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
.pybuilder/
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock
# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
# in version control.
# https://pdm.fming.dev/#use-with-ide
.pdm.toml
# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# pytype static type analyzer
.pytype/
# Cython debug symbols
cython_debug/
# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/
================================================
FILE: .github/FUNDING.yml
================================================
# These are supported funding model platforms
github: Barqawiz # Replace with up to 4 GitHub Sponsors-enabled usernames e.g., [user1, user2]
patreon: # Replace with a single Patreon username
open_collective: # Replace with a single Open Collective username
ko_fi: # Replace with a single Ko-fi username
tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
liberapay: # Replace with a single Liberapay username
issuehunt: # Replace with a single IssueHunt username
otechie: # Replace with a single Otechie username
custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']
================================================
FILE: LICENSE.md
================================================
The MIT License (MIT)
Copyright (c) 2017 Shakkala Project
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
================================================
FILE: PIP_README.md
================================================
# Shakkala Project V 2.1 مشروع شكّالة
## Introduction
The Shakkala project presents a recurrent neural network for Arabic text vocalization that automatically forms Arabic characters (تشكيل الحروف) to enhance text-to-speech systems. The model can also be used in other applications such as improving search results. In the beta version, the model was trained on over a million sentences, including a majority of historical Arabic data from books and some modern data from the internet. The accuracy of the model reached up to 95%, and in some data sets it achieved even higher levels of accuracy depending on complexity and data distribution. This innovative approach has the potential to significantly improve the quality of writing and text-to-speech systems for the Arabic language.
## Requirements
```
pip install shakkala
```
Note: Shakkala has been tested with TensorFlow 2.9.3.
## Code Examples (How to)
Check the full example in the (demo.py) file.
0. Import
```
from shakkala import Shakkala
```
1. Create Shakkala object
```
sh = Shakkala()
```
OR for advanced usage:
```
sh = Shakkala(version={version_num})
```
2. Prepare input
```
input_text = "فإن لم يكونا كذلك أتى بما يقتضيه الحال وهذا أولى"
input_int = sh.prepare_input(input_text)
```
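`prepare_input` asserts that the text is shorter than the model's maximum sentence length (315 characters for versions 2 and 3), so longer text must be split first. A hedged sketch of word-boundary chunking; `split_into_chunks` is an illustrative helper, not part of the library:
```
# Hedged sketch: split long text into chunks shorter than max_sentence,
# breaking only at word boundaries so each chunk stays valid input.
def split_into_chunks(text, max_len=315):
    chunks, current = [], ""
    for word in text.split():
        candidate = (current + " " + word).strip()
        if len(candidate) < max_len:
            current = candidate
        else:
            if current:
                chunks.append(current)
            current = word
    if current:
        chunks.append(current)
    return chunks
```
Each chunk can then be passed to `sh.prepare_input` separately and the diacritized results joined back together.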
3. Call the neural network
```
model, graph = sh.get_model()
logits = model.predict(input_int)[0]
```
4. Predict output
```
predicted_harakat = sh.logits_to_text(logits)
final_output = sh.get_final_text(input_text, predicted_harakat)
```
Available models:
- version_num=1: First test of the solution.
- version_num=2: Main release version.
- version_num=3: Some enhancements over version 2.

It is worth trying both version_num=2 and version_num=3.
## Performance Tips
Shakkala is built in an object-oriented way so the model is loaded into memory once for faster prediction. To make sure you don't load it multiple times in your service or application, follow these steps:
- Load the model into a global variable:
```
sh = Shakkala(folder_location, version={version_num})
model, graph = sh.get_model()
```
- Then inside your request function or loop add:
```
input_int = sh.prepare_input(input_text)
logits = model.predict(input_int)[0]
predicted_harakat = sh.logits_to_text(logits)
final_output = sh.get_final_text(input_text, predicted_harakat)
```
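The load-once pattern above can be sketched generically; the cache dict and `get_model_once` below are illustrative stand-ins, not part of the Shakkala API:
```
# Hedged sketch of the load-once pattern: cache the expensive model load
# so repeated requests reuse the same object. load_fn stands in for a
# call like sh.get_model(); nothing here is part of the Shakkala API.
_model_cache = {}

def get_model_once(key, load_fn):
    if key not in _model_cache:
        _model_cache[key] = load_fn()  # the expensive load happens only once
    return _model_cache[key]
```
In a web service, the same effect is achieved by constructing `Shakkala` and calling `get_model()` at module import time rather than inside the request handler.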
## Accuracy
In this beta (version 2), accuracy reached up to 95%, and on some data sets it reached even more, depending on complexity and data distribution.
This beta version was trained on more than a million sentences, with a majority of historical Arabic data from books and **some of** the available diacritized modern data on the internet.
### Prediction Example
For a live demo based on the Shakkala library, click this [link](http://ahmadai.com/shakkala/).
| Real output | Predicted output |
| ------------- | ---------------- |
| فَإِنْ لَمْ يَكُونَا كَذَلِكَ أَتَى بِمَا يَقْتَضِيهِ الْحَالُ وَهَذَا أَوْلَى | فَإِنْ لَمْ يَكُونَا كَذَلِكَ أَتَى بِمَا يَقْتَضِيهِ الْحَالُ وَهَذَا أَوْلَى |
| قَالَ الْإِسْنَوِيُّ وَسَوَاءٌ فِيمَا قَالُوهُ مَاتَ فِي حَيَاةِ أَبَوَيْهِ أَمْ لَا | قَالَ الْإِسْنَوِيُّ وَسَوَاءٌ فِيمَا قَالُوهُ مَاتَ فِي حَيَاةِ أَبَوَيْهِ أَمْ لَا |
| طَابِعَةٌ ثُلَاثِيَّةُ الْأَبْعَاد | طَابِعَةٌ ثَلَاثِيَّةُ الْأَبْعَادِ |
### Accuracy Enhancements
The model can be enhanced to reach more than 95% accuracy with the following:
- Availability of more diacritized **modern** data to train the network (the current version was trained mostly on available historical Arabic data and some modern data).
- Stacking different models.
## References
- A paper that compares different Arabic text diacritization models and shows that Shakkala performed best among the neural networks available for this task:
[Arabic Text Diacritization Using Deep Neural Networks, 2019](https://arxiv.org/abs/1905.01965)
## Citation
For academic work, use:
```
Shakkala, Arabic text vocalization, Barqawi & Zerrouki
```
OR bibtex format
```
@misc{
title={Shakkala, Arabic text vocalization},
author={Barqawi, Zerrouki},
url={https://github.com/Barqawiz/Shakkala},
year={2017}
}
```
## Contribution
### Core Team
1. Ahmad Barqawi: Neural Network Developer.
2. Taha Zerrouki: Mentor for Data and Results.
### Contributors
1. Zaid Farekh & propellerinc.me: Provide infrastructure and consultation support.
2. Mohammad Issam Aklik: Artist.
3. Brahim Sidi: Form new sentences.
4. Fadi Bakoura: Aggregate online content.
5. Ola Ghanem: Testing.
6. Ali Hamdi Ali Fadel: Contribute code.
License
-------
Free to use and distribute; just mention the original project name, Shakkala, as the base model.
The MIT License (MIT)
Copyright (c) 2017 Shakkala Project
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
================================================
FILE: README.md
================================================
# Shakkala Project مشروع شكّالة
[](https://badge.fury.io/py/shakkala)
## Introduction
The Shakkala project presents a recurrent neural network for Arabic text vocalization that automatically forms Arabic characters (تشكيل الحروف) to enhance text-to-speech systems. The model can also be used in other applications such as improving search results. In the beta version, the model was trained on over a million sentences, including a majority of historical Arabic data from books and some modern data from the internet. The accuracy of the model reached up to 95%, and in some data sets it achieved even higher levels of accuracy depending on complexity and data distribution. This innovative approach has the potential to significantly improve the quality of writing and text-to-speech systems for the Arabic language.
## Requirements
### Easy setup
No GitHub repository installation is needed when installing via [pip](https://pypi.org/project/shakkala/):
```
pip install shakkala
```
### Project setup
To run the source code from GitHub, install the requirements:
```
cd requirements
pip install -r requirements.txt
cd ..
```
Note: Shakkala has been tested with TensorFlow 2.9.3.
## Code Examples (How to)
Check the full example in the (demo.py) file.
0. Import
```
from shakkala import Shakkala
```
1. Create Shakkala object
```
sh = Shakkala()
```
OR for advanced usage:
```
sh = Shakkala(version={version_num})
```
2. Prepare input
```
input_text = "فإن لم يكونا كذلك أتى بما يقتضيه الحال وهذا أولى"
input_int = sh.prepare_input(input_text)
```
3. Call the neural network
```
model, graph = sh.get_model()
logits = model.predict(input_int)[0]
```
4. Predict output
```
predicted_harakat = sh.logits_to_text(logits)
final_output = sh.get_final_text(input_text, predicted_harakat)
```
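Under the hood, `get_final_text` pairs every input character with the haraka predicted for it and concatenates the two streams. A minimal stdlib sketch of that pairing; `interleave_harakat` is an illustrative name, not a library function:
```
# Hedged sketch: pair each input character with its predicted haraka.
# Empty predictions contribute nothing; a short prediction list is padded.
def interleave_harakat(characters, harakat):
    harakat = harakat + [""] * (len(characters) - len(harakat))
    return "".join(c + h for c, h in zip(characters, harakat))

FATHA = chr(1614)  # U+064E, one of the harakat codepoints used in helper.py
```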
Available models:
- version_num=1: First test of the solution.
- version_num=2: Main release version.
- version_num=3: Some enhancements over version 2.

It is worth trying both version_num=2 and version_num=3.
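Each version number maps to a bundled model file and a maximum input length, as set in `Shakkala.__init__`; a small sketch of that table (the dict name `MODEL_VERSIONS` is illustrative):
```
# Version -> (bundled model file, max input length), per Shakkala.__init__.
MODEL_VERSIONS = {
    1: ("simple_model.h5", 495),
    2: ("middle_model.h5", 315),
    3: ("second_model6.h5", 315),
}
```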
### Demo run
The fastest way to start with Shakkala is by running the demo from GitHub:
```
python demo.py
```
## Performance Tips
Shakkala is built in an object-oriented way so the model is loaded into memory once for faster prediction. To make sure you don't load it multiple times in your service or application, follow these steps:
- Load the model into a global variable:
```
sh = Shakkala(folder_location, version={version_num})
model, graph = sh.get_model()
```
- Then inside your request function or loop add:
```
input_int = sh.prepare_input(input_text)
logits = model.predict(input_int)[0]
predicted_harakat = sh.logits_to_text(logits)
final_output = sh.get_final_text(input_text, predicted_harakat)
```
## Accuracy
In this beta (version 2), accuracy reached up to 95%, and on some data sets it reached even more, depending on complexity and data distribution.
This beta version was trained on more than a million sentences, with a majority of historical Arabic data from books and **some of** the available diacritized modern data on the internet.
### Prediction Example
For a live demo based on the Shakkala library, click this [link](http://ahmadai.com/shakkala/).
| Real output | Predicted output |
| ------------- | ---------------- |
| فَإِنْ لَمْ يَكُونَا كَذَلِكَ أَتَى بِمَا يَقْتَضِيهِ الْحَالُ وَهَذَا أَوْلَى | فَإِنْ لَمْ يَكُونَا كَذَلِكَ أَتَى بِمَا يَقْتَضِيهِ الْحَالُ وَهَذَا أَوْلَى |
| قَالَ الْإِسْنَوِيُّ وَسَوَاءٌ فِيمَا قَالُوهُ مَاتَ فِي حَيَاةِ أَبَوَيْهِ أَمْ لَا | قَالَ الْإِسْنَوِيُّ وَسَوَاءٌ فِيمَا قَالُوهُ مَاتَ فِي حَيَاةِ أَبَوَيْهِ أَمْ لَا |
| طَابِعَةٌ ثُلَاثِيَّةُ الْأَبْعَاد | طَابِعَةٌ ثَلَاثِيَّةُ الْأَبْعَادِ |
### Accuracy Enhancements
The model can be enhanced to reach more than 95% accuracy with the following:
- Availability of more diacritized **modern** data to train the network (the current version was trained mostly on available historical Arabic data and some modern data).
- Stacking different models.
## Model Design
## References
- A paper that compares different Arabic text diacritization models and shows that Shakkala performed best among the neural networks available for this task:
[Arabic Text Diacritization Using Deep Neural Networks, 2019](https://arxiv.org/abs/1905.01965)
## Citation
For academic work, use:
```
Shakkala, Arabic text vocalization, Barqawi & Zerrouki
```
OR bibtex format
```
@misc{
title={Shakkala, Arabic text vocalization},
author={Barqawi, Zerrouki},
url={https://github.com/Barqawiz/Shakkala},
year={2017}
}
```
## Contribution
### Core Team
1. Ahmad Barqawi: Neural Network Developer.
2. Taha Zerrouki: Mentor for Data and Results.
### Contributors
1. Zaid Farekh & propellerinc.me: Provide infrastructure and consultation support.
2. Mohammad Issam Aklik: Artist.
3. Brahim Sidi: Form new sentences.
4. Fadi Bakoura: Aggregate online content.
5. Ola Ghanem: Testing.
6. Ali Hamdi Ali Fadel: Contribute code.
License
-------
Free to use and distribute; just mention the original project name, Shakkala, as the base model.
The MIT License (MIT)
Copyright (c) 2017 Shakkala Project
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
================================================
FILE: requirements/publish_commands.txt
================================================
python setup.py sdist
python setup.py bdist_wheel
twine upload dist/*
================================================
FILE: requirements/requirements.txt
================================================
click==8.1.3
h5py==3.8.0
html5lib==1.1
Markdown==3.4.1
nltk==3.6.6
numpy==1.24.1
oauthlib==3.2.2
opt-einsum==3.3.0
packaging==23.0
regex==2022.10.31
six==1.16.0
tensorflow==2.9.3
urllib3==1.26.14
webencodings==0.5.1
Werkzeug==2.2.2
wrapt==1.14.1
================================================
FILE: setup.py
================================================
from setuptools import setup, find_packages
with open("PIP_README.md", "r") as fh:
long_description = fh.read()
setup(
name='shakkala',
version='1.7',
author='Ahmad Albarqawi',
packages=find_packages(),
include_package_data=True,
url='https://ahmadai.com/shakkala/',
data_files=[('dictionary', ['shakkala/dictionary/input_vocab_to_int.pickle',
'shakkala/dictionary/output_int_to_vocab.pickle']),
('model', ['shakkala/model/middle_model.h5',
'shakkala/model/second_model6.h5',
'shakkala/model/simple_model.h5'])],
description="Deep learning for Arabic text Vocalization - التشكيل الالي للنصوص العربية",
long_description=long_description,
long_description_content_type="text/markdown",
install_requires=[ 'tensorflow==2.9.3', 'h5py==3.8.0', 'nltk==3.6.6', 'numpy==1.24.1', 'click==8.1.3' ],
)
================================================
FILE: shakkala/Shakkala.py
================================================
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
License
-------
The MIT License (MIT)
Copyright (c) 2017 Tashkel Project
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
Created on Sat Dec 16 22:46:28 2017
@author: Ahmad Barqawi
"""
from . import helper
import os
import tensorflow as tf
from tensorflow.compat.v1.keras.models import Model
from tensorflow.compat.v1.keras.models import load_model
from tensorflow.compat.v1.keras.optimizers import Adam
from tensorflow.compat.v1.keras.losses import sparse_categorical_crossentropy
from tensorflow.compat.v1.keras.preprocessing.sequence import pad_sequences
import numpy as np
class Shakkala:
    # initial
#max_sentence = 495
def __init__(self, folder_location=None, version=3):
if folder_location is None:
folder_location = os.path.dirname(os.path.abspath(__file__))
        assert folder_location is not None, "folder_location can't be empty; pass the location of the Keras model"
model_folder = os.path.join(folder_location, 'model')
if version == 1:
self.max_sentence = 495
self.model_location = os.path.join(model_folder, ('simple_model' + '.h5'))
elif version == 2:
self.max_sentence = 315
self.model_location = os.path.join(model_folder, ('middle_model' + '.h5'))
        elif version == 3:
            self.max_sentence = 315
            self.model_location = os.path.join(model_folder, ('second_model6' + '.h5'))
        else:
            raise ValueError("unsupported version: {} (expected 1, 2 or 3)".format(version))
dictionary_folder = os.path.join(folder_location, 'dictionary')
input_vocab_to_int = helper.load_binary('input_vocab_to_int',dictionary_folder)
output_int_to_vocab = helper.load_binary('output_int_to_vocab',dictionary_folder)
self.dictionary = {
"input_vocab_to_int":input_vocab_to_int,
"output_int_to_vocab":output_int_to_vocab}
# model
def get_model(self):
print('start load model')
model = load_model(self.model_location)
print('end load model')
graph = tf.compat.v1.get_default_graph()
return model, graph
# input processing
def prepare_input(self, input_sent):
        assert input_sent is not None and len(input_sent) < self.max_sentence, \
            "max length for input_sent is {} characters; split the text into multiple sentences and call the function for each".format(self.max_sentence)
input_sent = [input_sent]
return self.__preprocess(input_sent)
def __preprocess(self, input_sent):
input_vocab_to_int = self.dictionary["input_vocab_to_int"]
        input_letters_ids = [[input_vocab_to_int.get(ch, input_vocab_to_int['<UNK>']) for ch in sent] for sent in input_sent]
input_letters_ids = self.__pad_size(input_letters_ids, self.max_sentence)
return input_letters_ids
# output processing
def logits_to_text(self, logits):
text = []
for prediction in np.argmax(logits, 1):
            if self.dictionary['output_int_to_vocab'][prediction] == '<PAD>':
continue
text.append(self.dictionary['output_int_to_vocab'][prediction])
return text
def get_final_text(self,input_sent, output_sent):
return helper.combine_text_with_harakat(input_sent, output_sent)
def clean_harakat(self, input_sent):
return helper.clear_tashkel(input_sent)
# common
def __pad_size(self, x, length=None):
return pad_sequences(x, maxlen=length, padding='post')
================================================
FILE: shakkala/__init__.py
================================================
from .Shakkala import Shakkala
================================================
FILE: shakkala/demo.py
================================================
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Example code using Shakkala library
"""
import os
from Shakkala import Shakkala
if __name__ == "__main__":
input_text = "فإن لم يكونا كذلك أتى بما يقتضيه الحال وهذا أولى"
folder_location = './'
# create Shakkala object
sh = Shakkala(folder_location, version=3)
# prepare input
input_int = sh.prepare_input(input_text)
print("finished preparing input")
print("start with model")
model, graph = sh.get_model()
# with graph.as_default():
logits = model.predict(input_int)[0]
print("prepare and print output")
predicted_harakat = sh.logits_to_text(logits)
final_output = sh.get_final_text(input_text, predicted_harakat)
print(final_output)
print("finished successfully")
================================================
FILE: shakkala/helper.py
================================================
"""
License
-------
The MIT License (MIT)
Copyright (c) 2017 Tashkel Project
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
Created on Sat Dec 16 22:46:28 2017
@author: Ahmad Barqawi
"""
import os
import glob
import string
import re
import pickle
from nltk.tokenize import sent_tokenize, word_tokenize
#convert using chr(harakat[0])
harakat = [1614,1615,1616,1618,1617,1611,1612,1613]
connector = 1617
def save_binary(data, file, folder):
location = os.path.join(folder, (file+'.pickle') )
with open(location, 'wb') as ff:
pickle.dump(data, ff, protocol=pickle.HIGHEST_PROTOCOL)
def load_binary(file, folder):
location = os.path.join(folder, (file+'.pickle') )
with open(location, 'rb') as ff:
data = pickle.load(ff)
return data
def get_sentences(data):
return [sent for line in re.split("[\n,،]+", data) if line for sent in sent_tokenize(line.strip()) if sent]
#return [sent for line in data.split('\n') if line for sent in sent_tokenize(line) if sent]
def clear_punctuations(text):
text = "".join(c for c in text if c not in string.punctuation)
return text
def clear_english_and_numbers(text):
    text = re.sub(r"[a-zA-Z0-9٠-٩]", " ", text)
return text
def is_tashkel(text):
return any(ord(ch) in harakat for ch in text)
def clear_tashkel(text):
text = "".join(c for c in text if ord(c) not in harakat)
return text
def get_harakat():
return "".join(chr(item)+"|" for item in harakat)[:-1]
def get_taskel(sentence):
output = []
current_haraka = ""
for ch in reversed(sentence):
if ord(ch) in harakat:
if (current_haraka == "") or\
(ord(ch) == connector and chr(connector) not in current_haraka) or\
(chr(connector) == current_haraka):
current_haraka += ch
else:
if current_haraka == "":
current_haraka = "ـ"
output.insert(0, current_haraka)
current_haraka = ""
return output
def combine_text_with_harakat(input_sent, output_sent):
#print("input : " , len(input_sent))
#print("output : " , len(output_sent))
"""
harakat_stack = Stack()
temp_stack = Stack()
#process harakat
for character, haraka in zip(input_sent, output_sent):
temp_stack = Stack()
        haraka = haraka.replace("<UNK>","").replace("<PAD>","").replace("ـ","")
if (character == " " and haraka != "" and ord(haraka) == connector):
combine = harakat_stack.pop()
combine += haraka
harakat_stack.push(combine)
else:
harakat_stack.push(haraka)
"""
#fix combine differences
input_length = len(input_sent)
output_length = len(output_sent) # harakat_stack.size()
for index in range(0,(input_length-output_length)):
output_sent.append("")
#combine with text
text = ""
for character, haraka in zip(input_sent, output_sent):
        if haraka == '<PAD>' or haraka == 'ـ':
haraka = ''
text += character + "" + haraka
return text
class Stack:
def __init__(self):
self.stack = []
def isEmpty(self):
return self.size() == 0
def push(self, item):
self.stack.append(item)
def pop(self):
return self.stack.pop()
def peek(self):
if self.size() == 0:
return None
else:
return self.stack[len(self.stack)-1]
def size(self):
return len(self.stack)
def to_array(self):
return self.stack
================================================
FILE: shakkala/model/second_model6.h5
================================================
[File too large to display: 28.7 MB]