Repository: yanghuan/proton
Branch: master
Commit: e19e26a4c727
Files: 20
Total size: 56.7 KB

Directory structure:
gitextract_2lhivfiy/
├── .gitignore
├── LICENSE
├── README.md
├── nested_parser.py
├── proton.py
├── raw/
│   ├── en/
│   │   ├── sample.xlsx
│   │   └── sample2.xlsx
│   └── zh/
│       ├── sample.xlsx
│       └── sample2.xlsx
└── sample/
    ├── README.md
    ├── __export.bat
    ├── __export.py
    ├── complex_nested_obj.xlsx
    ├── hero.xlsx
    ├── mount.xlsx
    ├── text.xlsx
    └── tools/
        ├── CSharpGeneratorForProton/
        │   └── README.md
        └── py37/
            ├── README.md
            └── sxl/
                ├── __init__.py
                └── sxl.py

================================================
FILE CONTENTS
================================================

================================================
FILE: .gitignore
================================================
*.pyc

================================================
FILE: LICENSE
================================================
Copyright 2016 YANG Huan (sy.yanghuan@gmail.com)

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

================================================
FILE: README.md
================================================
[English](https://github.com/yanghuan/proton#proton) [Chinese](https://github.com/yanghuan/proton#proton-1)

# proton

Proton is a tool that exports Excel spreadsheets to configuration files. It can export to xml, json, or lua format, and through external extensions it can automatically generate the code that reads the configuration. It is simple, flexible, easy to use, and nonetheless powerful.
## Features

- Written in Python and cross-platform; depends only on the third-party library [sxl](https://pypi.org/project/sxl/), with the [full code](https://github.com/yanghuan/proton/blob/master/proton.py) at just over 600 lines
- Uses a dedicated rule syntax to describe the Excel format information; simple, easy to understand, flexible, and powerful ([detailed description](https://github.com/yanghuan/proton/wiki/document_en))
- Can export the Excel format information for external programs to use, e.g. to automatically generate the code that reads the configuration

## Generating auto-read code

Use the "-c" parameter to generate a json file containing the Excel format information; based on it, each language can implement a tool that automatically generates the reading code ([the specific format](https://github.com/yanghuan/proton/wiki/schema_en)). A tool for C# has already been implemented; users of other languages can implement their own, and links to such implementations are welcome so that others can use them.

- [CSharpGeneratorForProton](https://github.com/yanghuan/CSharpGeneratorForProton) generates C# code that reads xml, json, and protobuf. It can also convert xml and json to protobuf's binary format and generate the corresponding reading code (using protobuf-net).

## Example

The [sample directory](https://github.com/yanghuan/proton/tree/master/sample) is a fully configured example that can be used directly on Windows. It already contains a Python 3 environment; simply run __export.bat to complete the export. To add a new Excel file, add it to the relevant array in __export.py.
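As a concrete illustration of the documented flags, a typical invocation might look like this (the workbook names, output folder, and sign value are illustrative, not part of the repository):

```cmd
python proton.py -p hero.xlsx,mount.xlsx -f out -e json -s client -c schema.json
```

This would export the listed workbooks to the out folder as json, keep only the columns whose sign matches "client", and write the Excel structure to schema.json for an external code generator to consume.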
## Command Line Parameters

```cmd
usage python proton.py [-p filelist] [-f outfolder] [-e format]
Arguments
-p : input excel files, use , or ; or space to separate
-f : out folder
-e : format, json or xml or lua
Options
-s : sign, controls whether the column is exported, default all export
-t : suffix, export file suffix
-r : the separator of object field, default is ; you can use it to change
-m : use the count of multiprocesses to export, default is cpu count
-c : a file path, save the excel structure to json, the external program uses this file to automatically generate the read code
-h : print this help message and exit
-x : don't append 's' on names
```

## Documentation

Wiki https://github.com/yanghuan/proton/wiki/document_en

FAQ https://github.com/yanghuan/proton/wiki/FAQ_en

## *License*

[Apache 2.0 license](https://github.com/yanghuan/proton/blob/master/LICENSE).

_____________________

# proton

proton is a tool that exports Excel spreadsheets to configuration files. It can export to xml, json, or lua format, and through external extensions it supports automatically generating the code that reads the configuration. It is simple, flexible, easy to use, and nonetheless powerful.

## Features

- Written in Python and usable cross-platform; depends only on the third-party library [sxl](https://pypi.org/project/sxl/), with the [full code at just over 600 lines](https://github.com/yanghuan/proton/blob/master/proton.py).
- Uses a dedicated rule syntax to describe the Excel format information; concise, easy to understand, flexible, and powerful ([detailed description](https://github.com/yanghuan/proton/wiki/document_zh)).
- Can export the Excel format information for external programs to use, e.g. to automatically generate the code that reads the configuration.

## Backend programs (generating auto-read code)

Use the "-c" parameter to generate a json file containing the Excel format information; based on it, each language can implement a tool that automatically generates the reading code ([specific format description](https://github.com/yanghuan/proton/wiki/schema_zh)). A tool for C# has already been implemented; users of other languages can implement their own, and links to such implementations are welcome so that others can use them.

- [CSharpGeneratorForProton](https://github.com/yanghuan/CSharpGeneratorForProton) generates C# code that reads xml, json, and protobuf. It can also convert xml and json to protobuf's binary format and generate the corresponding reading code (using protobuf-net).

## Sample project

The [sample directory](https://github.com/yanghuan/proton/tree/master/sample) contains a fully configured example that can be used directly on Windows. It already includes a Python 3 environment; simply run __export.bat to complete the export. To add a new Excel file, add it to the relevant array in __export.py.

## Command Line Parameters

```cmd
usage python proton.py [-p filelist] [-f outfolder] [-e format]
Arguments
-p : input excel files, use , or ; or space to separate
-f : out folder
-e : format, json or xml or lua
Options
-s : sign, controls whether the column is exported, default all export
-t : suffix, export file suffix
-r : the separator of object field, default is ; you can use it to change
-m : use the count of multiprocesses to export, default is cpu count
-c : a file path, save the excel structure to json, the external program uses this file to automatically generate the read code
-h : print this help message and exit
-x : don't append 's' on names
```

## Documentation

Format description https://github.com/yanghuan/proton/wiki/document_zh

FAQ https://github.com/yanghuan/proton/wiki/FAQ_zh

## Discussion

- [FAQ](https://github.com/yanghuan/proton/wiki/FAQ_zh)
- Email: sy.yanghuan@gmail.com
- QQ group: 715350749

## *License*

[Apache 2.0 license](https://github.com/yanghuan/proton/blob/master/LICENSE).

================================================
FILE: nested_parser.py
================================================
#encoding=utf-8 import string _OPEN_TO_CLOSE = { '{': '}', '[': ']', '(': ')', } _CLOSE_SET = set(_OPEN_TO_CLOSE.values()) def split_top_level(text, delimiter, skip_empty = False): if text is None: return [] if not delimiter: raise ValueError('delimiter can not be empty') values = [] stack = [] start = 0 i = 0 n = len(text) while i < n: c = text[i] if c == '\\' and i + 1 < n: i += 2 continue if c in _OPEN_TO_CLOSE: stack.append(_OPEN_TO_CLOSE[c]) i += 1 continue if c in _CLOSE_SET: if not stack or c != stack[-1]: raise ValueError('%s is not a legal nested expression' % text) stack.pop() i += 1 continue if not stack and text.startswith(delimiter, i): value = text[start:i] if not skip_empty or value: values.append(value) i += len(delimiter) start = i continue i += 1 if stack: raise ValueError('%s is not a legal nested expression' % text) value = text[start:] if not skip_empty or value: values.append(value) return values def unwrap_container(text, begin, end): value = text.strip() if len(value) >= 2 and value[0] == begin and value[-1] == end: return value[1:-1] return value def split_list_values(value): return
split_top_level(unwrap_container(value, '[', ']'), ',', True) def split_obj_type_fields(type_, separator): return split_top_level(unwrap_container(type_, '{', '}'), separator, True) def split_obj_values(value, separator): return split_top_level(unwrap_container(value, '{', '}'), separator, True) def split_field_declaration(text): declaration = text.strip() if not declaration: raise ValueError('field declaration can not be empty') values = [] stack = [] start = None i = 0 n = len(declaration) while i < n: c = declaration[i] if c == '\\' and i + 1 < n: if start is None: start = i i += 2 continue if c in _OPEN_TO_CLOSE: if start is None: start = i stack.append(_OPEN_TO_CLOSE[c]) i += 1 continue if c in _CLOSE_SET: if not stack or c != stack[-1]: raise ValueError('%s is not a legal field declaration' % declaration) stack.pop() i += 1 continue if c in string.whitespace and not stack: if start is not None: values.append(declaration[start:i]) start = None i += 1 continue if start is None: start = i i += 1 if stack: raise ValueError('%s is not a legal field declaration' % declaration) if start is not None: values.append(declaration[start:]) if len(values) < 2: raise ValueError('%s is not a legal field declaration' % declaration) return (' '.join(values[:-1]), values[-1]) ================================================ FILE: proton.py ================================================ #encoding=utf-8 ''' Copyright YANG Huan (sy.yanghuan@gmail.com) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
''' import sys if sys.version_info < (3, 0): print('python version need more than 3.x') sys.exit(1) import os import string import collections import codecs import getopt import re import json import traceback import multiprocessing import xml.etree.ElementTree as ElementTree import xml.dom.minidom as minidom import sxl import nested_parser def fillvalue(parent, name, value, isschema): if isinstance(parent, list): parent.append(value) else: if isschema and not re.match('^_|[a-zA-Z]\w*$', name): raise ValueError('%s is a illegal identifier' % name) parent[name] = value def getindex(infos, name): return next((i for i, j in enumerate(infos) if j == name), -1) def getcellvalue(value): return str(value) if value is not None else '' def getscemainfo(typename, description): if isinstance(typename, BindType): typename = typename.typename return [typename, description] if description else [typename] def getexportmark(sheetName): p = re.search('\|[' + string.whitespace + ']*(_|[a-zA-Z]\w+)', sheetName) return p.group(1) if p else False def issignmatch(signarg, sign): if signarg is None: return True return True if [s for s in re.split(r'[/\\, :]', sign) if s in signarg] else False def isoutofdate(srcfile, tarfile): return not os.path.isfile(tarfile) or os.path.getmtime(srcfile) > os.path.getmtime(tarfile) def gerexportfilename(root, format_, folder): filename = root + '.' 
+ format_ return os.path.join(folder, filename) def splitspace(s): return nested_parser.split_field_declaration(s) def buildbasexml(parent, name, value, noplural = False): value = str(value) listtag = name if noplural else name + 's' if parent.tag == listtag: element = ElementTree.Element(name) element.text = value parent.append(element) else: parent.set(name, value) def buildlistxml(parent, name, list_, noplural = False): element = ElementTree.Element(name) parent.append(element) itemname = name if noplural else name[:-1] for v in list_: buildxml(element, itemname, v, noplural) def buildobjxml(parent, name, obj, noplural = False): element = ElementTree.Element(name) parent.append(element) for k, v in obj.items(): buildxml(element, k, v, noplural) def buildxml(parent, name, value, noplural = False): if isinstance(value, int) or isinstance(value, float) or isinstance(value, str): buildbasexml(parent, name, value, noplural) elif isinstance(value, list): buildlistxml(parent, name, value, noplural) elif isinstance(value, dict): buildobjxml(parent, name, value, noplural) def savexml(record, noplural = False): book = ElementTree.ElementTree() book.append = lambda e: book._setroot(e) buildxml(book, record.root, record.obj, noplural) xmlstr = ElementTree.tostring(book.getroot(), 'utf-8') dom = minidom.parseString(xmlstr) with codecs.open(record.exportfile, 'w', 'utf-8') as f: dom.writexml(f, '', ' ', '\n', 'utf-8') print('save %s from %s in %s' % (record.exportfile, record.sheet.name, record.path)) def newline(count): return '\n' + ' ' * count def tolua(obj, indent = 1): if isinstance(obj, int) or isinstance(obj, float) or isinstance(obj, str): yield json.dumps(obj, ensure_ascii = False) else: yield '{' islist = isinstance(obj, list) isfirst = True for i in obj: if isfirst: isfirst = False else: yield ',' yield newline(indent) if not islist: k = i i = obj[k] yield k yield ' = ' for part in tolua(i, indent + 1): yield part yield newline(indent - 1) yield '}' def toycl(obj, 
indent = 0): islist = isinstance(obj, list) for i in obj: yield newline(indent) if not islist: k = i i = obj[k] yield k if isinstance(i, int) or isinstance(i, float) or isinstance(i, str): if not islist: yield ' = ' yield json.dumps(i, ensure_ascii = False) else: if not islist: yield ' ' yield '{' for part in toycl(i, indent + 1): yield part yield newline(indent) yield '}' class BindType: def __init__(self, type_): self.typename = type_ def __eq__(self, other): return self.typename == other class Record: def __init__(self, path, sheet, exportfile, root, item, obj, exportmark): self.path = path self.sheet = sheet self.exportfile = exportfile self.root = root self.item = item self.setobj(obj) self.exportmark = exportmark def setobj(self, obj): self.schema = obj[0] if obj else None self.obj = obj[1] if obj else None class Constraint: def __init__(self, mark, filed): self.mark = mark self.field = filed class Exporter: configsheettitles = ('name', 'value', 'type', 'sign', 'description') spacemaxrowcount = 3 def __init__(self, context): self.context = context self.records = [] def checkstringescape(self, t, v): return v if not v or not 'string' in t else v.replace('\\n', '\n').replace('\,', '\0').replace('\\' + self.context.objseparator, '\a') def stringescape(self, s): return s.replace('\0', ',').replace('\a', self.context.objseparator) def pluralname(self, name): return name if self.context.noplural else name + 's' def gettype(self, type_): if type_[-2] == '[' and type_[-1] == ']': return 'list' if type_[0] == '{' and type_[-1] == '}': return 'obj' if type_ in ('int', 'double', 'string', 'bool', 'long', 'float'): return type_ p = re.search('(int|string|long)[' + string.whitespace + ']*\((\S+)\.(\S+)\)', type_) if p: type_ = BindType(p.group(1)) type_.mark = p.group(2) type_.field = p.group(3) return type_ raise ValueError('%s is not a legal type' % type_) def buildlistexpress(self, parent, type_, name, value, isschema): basetype = type_[:-2] list_ = [] if isschema: 
self.buildexpress(list_, basetype, name, None, isschema) list_ = getscemainfo(list_[0], value) else: valuelist = nested_parser.split_list_values(value) for v in valuelist: self.buildexpress(list_, basetype, name, v, False, True) fillvalue(parent, self.pluralname(name), list_, isschema) def buildobjexpress(self, parent, type_, name, value, isschema): obj = collections.OrderedDict() fieldnamestypes = nested_parser.split_obj_type_fields(type_, self.context.objseparator) if isschema: for i in range(0, len(fieldnamestypes)): fieldtype, fieldname = splitspace(fieldnamestypes[i]) self.buildexpress(obj, fieldtype, fieldname, None, isschema) obj = getscemainfo(obj, value) else: fieldValues = nested_parser.split_obj_values(value, self.context.objseparator) for i in range(0, len(fieldnamestypes)): if i < len(fieldValues): fieldtype, fieldname = splitspace(fieldnamestypes[i]) self.buildexpress(obj, fieldtype, fieldname, fieldValues[i], False, True) fillvalue(parent, name, obj, isschema) def buildbasexpress(self, parent, type_, name, value, isschema, inobj): typename = self.gettype(type_) if isschema: value = getscemainfo(typename, value) else: if typename != 'string' and value.isspace(): return if typename == 'int' or typename == 'long': value = int(float(value)) elif typename == 'double' or typename == 'float': value = float(value) elif typename == 'string': if value.endswith('.0'): # may read is like "123.0" try: value = str(int(float(value))) except ValueError: value = self.stringescape(str(value)) else: value = self.stringescape(str(value)) if inobj and len(value) > 0 and value[0] == '\n': value = value[1:] elif typename == 'bool': try: value = int(float(value)) value = False if value == 0 else True except ValueError: value = value.lower() if value in ('false', 'no', 'off'): value = False elif value in ('true', 'yes', 'on'): value = True else: raise ValueError('%s is a illegal bool value' % value) fillvalue(parent, name, value, isschema) def buildexpress(self, parent, 
type_, name, value, isschema = False, inobj = False): typename = self.gettype(type_) if typename == 'list': self.buildlistexpress(parent, type_, name, value, isschema) elif typename == 'obj': self.buildobjexpress(parent, type_, name, value, isschema) else: self.buildbasexpress(parent, type_, name, value, isschema, inobj) def getrootname(self, exportmark, isitem): root = self.pluralname(exportmark) if isitem else exportmark return root + (self.context.extension or '') def export(self, path): self.path = path data = sxl.Workbook(self.path) cout = None for sheetname in [i for i in data.sheets if type(i) is str]: self.sheetname = sheetname exportmark = getexportmark(sheetname) if exportmark: sheet = data.sheets[sheetname] coutmark = sheetname.endswith('<<') configtitleinfo = self.getconfigsheetfinfo(sheet) if not configtitleinfo: root = self.getrootname(exportmark, not coutmark) item = exportmark else: root = self.getrootname(exportmark, False) item = None if not cout: self.checksheetname(self.path, sheetname, root) exportfile = gerexportfilename(root, self.context.format, self.context.folder) if isoutofdate(self.path, exportfile): if item: exportobj = self.exportitemsheet(sheet) else: exportobj = self.exportconfigsheet(sheet, configtitleinfo) if coutmark: if not item: cout = exportobj else: cout = (collections.OrderedDict(), collections.OrderedDict()) itemkey = self.pluralname(item) cout[0][itemkey] = [[exportobj[0]]] item = None exportobj = cout obj = exportobj[1] if obj: cout[1][itemkey] = obj self.records.append(Record(self.path, sheet, exportfile, root, item, exportobj, exportmark)) else: print('%s is not changed' % (self.path)) break else: if item: exportobj = self.exportitemsheet(sheet) cout[0][self.pluralname(item)] = [[exportobj[0]]] obj = exportobj[1] if obj: cout[1][self.pluralname(item)] = obj else: exportobj = self.exportconfigsheet(sheet, configtitleinfo) cout[0].update(exportobj[0]) obj = exportobj[1] if obj: cout[1].update(obj) return self.saves() def 
getconfigsheetfinfo(self, sheet): titles = sheet.head(1)[0] nameindex = getindex(titles, self.configsheettitles[0]) valueindex = getindex(titles, self.configsheettitles[1]) typeindex = getindex(titles, self.configsheettitles[2]) signindex = getindex(titles, self.configsheettitles[3]) descriptionindex = getindex(titles, self.configsheettitles[4]) if nameindex != -1 and valueindex != -1 and typeindex != -1: return (nameindex, valueindex, typeindex, signindex, descriptionindex) else: return None def exportitemsheet(self, sheet): rows = iter(sheet.rows) descriptions = next(rows) types = next(rows) names = next(rows) signs = next(rows) ncols = len(types) titleinfos = [] schemaobj = collections.OrderedDict() try: for colindex in range(ncols): type_ = getcellvalue(types[colindex]).strip() name = getcellvalue(names[colindex]).strip() signmatch = issignmatch(self.context.sign, getcellvalue(signs[colindex]).strip()) titleinfos.append((type_, name, signmatch)) if self.context.codegenerator: if type_ and name and signmatch: self.buildexpress(schemaobj, type_, name, descriptions[colindex], True) except Exception as e: e.args += ('%s has a title error, %s at %d column in %s' % (sheet.name, (type_, name), colindex + 1, self.path) , '') raise e list_ = [] hasexport = next((i for i in titleinfos if i[0] and i[1] and i[2]), False) if hasexport: try: spacerowcount = 0 self.rowindex = 3 for row in rows: self.rowindex += 1 item = collections.OrderedDict() firsttext = getcellvalue(row[0]).strip() if not firsttext: spacerowcount += 1 if spacerowcount >= self.spacemaxrowcount: # if space row is than max count, skil follow rows break if not firsttext or firsttext[0] == '#': # current line skip continue skiptokenindex = None if firsttext[0] == '!': nextpos = firsttext.find('!', 1) if nextpos >= 2: signtoken = firsttext[1: nextpos] if issignmatch(self.context.sign, signtoken.strip()): continue else: skiptokenindex = len(signtoken) + 2 for self.colindex in range(ncols): signmatch = 
titleinfos[self.colindex][2] if signmatch: type_ = titleinfos[self.colindex][0] name = titleinfos[self.colindex][1] value = getcellvalue(row[self.colindex]) if skiptokenindex and self.colindex == 0: value = value.lstrip()[skiptokenindex:] if type_ and name and value: self.buildexpress(item, type_, name, self.checkstringescape(type_, value)) spacerowcount = 0 if item: list_.append(item) except Exception as e: e.args += ('%s has a error in %d row %d(%s) column in %s' % (sheet.name, self.rowindex + 1, self.colindex + 1, name, self.path) , '') raise e return (schemaobj, list_) def exportconfigsheet(self, sheet, titleindexs): rows = iter(sheet.rows) next(rows) nameindex = titleindexs[0] valueindex = titleindexs[1] typeindex = titleindexs[2] signindex = titleindexs[3] descriptionindex = titleindexs[4] schemaobj = collections.OrderedDict() obj = collections.OrderedDict() try: spacerowcount = 0 self.rowindex = 0 for row in rows: self.rowindex += 1 name = getcellvalue(row[nameindex]).strip() value = getcellvalue(row[valueindex]) type_ = getcellvalue(row[typeindex]).strip() description = getcellvalue(row[descriptionindex]).strip() if signindex > 0: sign = getcellvalue(row[signindex]).strip() if not issignmatch(self.context.sign, sign): continue if not name and not value and not type_: spacerowcount += 1 if spacerowcount >= self.spacemaxrowcount: break # if space row is than max count, skil follow rows continue if name and type_: if(name[0] != '#'): # current line skip if self.context.codegenerator: self.buildexpress(schemaobj, type_, name, description, True) if value: self.buildexpress(obj, type_, name, self.checkstringescape(type_, value)) spacerowcount = 0 except Exception as e: e.args += ('%s has a error in %d row (%s, %s, %s) in %s' % (sheet.name, self.rowindex + 1, type_, name, value, self.path) , '') raise e return (schemaobj, obj) def saves(self): schemas = [] for r in self.records: if r.obj: self.save(r) if self.context.codegenerator: # has code generator 
schemas.append({ 'path': r.path, 'exportfile' : r.exportfile, 'root' : r.root, 'item' : r.item or r.exportmark, 'schema' : r.schema }) return schemas def save(self, record): if not record.obj: return if not os.path.isdir(self.context.folder): os.makedirs(self.context.folder) if self.context.format == 'json': jsonstr = json.dumps(record.obj, ensure_ascii = False, indent = 2) with codecs.open(record.exportfile, 'w', 'utf-8') as f: f.write(jsonstr) print('save %s from %s in %s' % (record.exportfile, record.sheet.name, record.path)) elif self.context.format == 'xml': if record.item: record.obj = { self.pluralname(record.item) : record.obj } savexml(record, self.context.noplural) elif self.context.format == 'lua': luastr = "".join(tolua(record.obj)) with codecs.open(record.exportfile, 'w', 'utf-8') as f: f.write('return ') f.write(luastr) print('save %s from %s in %s' % (record.exportfile, record.sheet.name, record.path)) elif self.context.format == 'ycl': g = toycl(record.obj) next(g) # skip first newline yclstr = "".join(g) with codecs.open(record.exportfile, 'w', 'utf-8') as f: f.write(yclstr) print('save %s from %s in %s' % (record.exportfile, record.sheet.name, record.path)) def checksheetname(self, path, sheetname, root): r = next((r for r in self.records if r.root == root), False) if r: raise ValueError('%s in %s is already defined in %s' % (root, path, r.path)) def export(context, path): try: return Exporter(context).export(path) except Exception as e: return traceback.format_exc() def exportpack(args): return export(args[0], args[1]) def exportfiles(context): paths = [] for path in re.split(r'[,;|]+', context.path.strip()): if path: if not os.path.isfile(path): raise ValueError('%s is not exists' % path) elif path in paths: raise ValueError('%s is already has' % path) paths.append(path) errors = [] schemas = [] def append(result): if type(result) is str: errors.append(result) else: schemas.extend(result) if context.multiprocessescount is None or 
context.multiprocessescount > 1: with multiprocessing.Pool(context.multiprocessescount) as p: for i in p.map(exportpack, [(context, x) for x in paths]): append(i) else: for path in paths: result = export(context, path) append(result) if schemas: if context.codegenerator: schemasjson = json.dumps(schemas, ensure_ascii = False, indent = 2) dir = os.path.dirname(context.codegenerator) if dir and not os.path.isdir(dir): os.makedirs(dir) with codecs.open(context.codegenerator, 'w', 'utf-8') as f: f.write(schemasjson) exports = [] for schema in schemas: exportfile = schema['exportfile'] r = next((r for r in exports if r['exportfile'] == exportfile), False) if r: errors.append('%s in %s is already defined in %s' % (schema['root'], schema['path'], r['path'])) os.remove(exportfile) else: exports.append(schema) if errors: print('\n\n'.join(errors)) sys.exit(-1) print("export finished successfully!!!") class Context: '''usage python proton.py [-p filelist] [-f outfolder] [-e format] Arguments -p : input excel files, use , or ; or space to separate -f : out folder -e : format, json or xml or lua or ycl Options -s : sign, controls whether the column is exported, default all export -t : suffix, export file suffix -r : the separator of object field, default is ; you can use it to change -m : use the count of multiprocesses to export, default is cpu count -c : a file path, save the excel structure to json the external program uses this file to automatically generate the read code -x : disable auto plural naming (do not append 's') -h : print this help message and exit https://github.com/yanghuan/proton''' if __name__ == '__main__': print('argv:' , sys.argv) opst, args = getopt.getopt(sys.argv[1:], 'p:f:e:s:t:r:m:c:xh') context = Context() context.path = None context.folder = '.'
context.format = 'json' context.sign = None context.extension = None context.objseparator = ';' context.codegenerator = None context.multiprocessescount = None context.noplural = False for op, v in opst: if op == '-p': context.path = v elif op == '-f': context.folder = v elif op == '-e': context.format = v.lower() elif op == '-s': context.sign = v elif op == '-t': context.extension = v elif op == '-r': context.objseparator = v elif op == '-m': context.multiprocessescount = int(v) if v is not None else None elif op == '-c': context.codegenerator = v elif op == '-x': context.noplural = True elif op == '-h': print(Context.__doc__) sys.exit() if not context.path: print(Context.__doc__) sys.exit(2) exportfiles(context)

================================================
FILE: sample/README.md
================================================
This is a fully configured example that can be used directly on Windows. It already contains a Python 3 environment; simply run __export.bat to complete the export. To add a new Excel file, add it to the relevant array in __export.py.
================================================
FILE: sample/__export.bat
================================================
tools\py37\py37.exe __export.py

================================================
FILE: sample/__export.py
================================================
#encoding=utf-8 # Common configuration files to export (needed by both the client and the server) EXPORT_FILES = [ "hero.xlsx", "mount.xlsx", ] # Additional configuration files to export for the client (client only) EXPORT_CLIENT_ONLY = [ "text.xlsx" ] # Additional configuration files to export for the server (server only) EXPORT_SERVER_ONLY = [ ] # do not modify the following import os import platform import traceback import shutil import sys exportscript = '../proton.py' pythonpath = 'tools\\py37\\py37.exe ' if platform.system() == 'Windows' else 'python ' class ExportError(Exception): pass def export(filelist, format, sign, outfolder, suffix, schema): cmd = r' -p "' + ','.join(filelist) + '" -f ' + outfolder + ' -e ' + format + ' -s ' + sign if suffix: cmd += ' -t ' + suffix if schema: cmd += ' -c ' + schema cmd = pythonpath + exportscript + cmd code = os.system(cmd) if code != 0: raise ExportError('excel export failed, see the printed output') def codegenerator(schema, outfolder, namespace, suffix): if os.path.exists(schema): cmd = 'tools\CSharpGeneratorForProton\CSharpGeneratorForProton.exe ' + '-n ' + namespace + ' -f ' + outfolder + ' -p ' + schema if suffix: cmd += ' -t ' + suffix code = os.system(cmd) os.remove(schema) if code != 0: raise ExportError('codegenerator failed, see the printed output') def exportserver(): export(EXPORT_FILES + EXPORT_SERVER_ONLY, 'json', 'server', 'config_server', 'Config', 'schemaserver.json') codegenerator('schemaserver.json', 'config_server/ConfigGenerator/Template', 'Ice.Project.Config', 'Template') def exportclient(): export(EXPORT_FILES + EXPORT_CLIENT_ONLY, 'lua', 'client',
'config_client', 'Template', None) def main(): try: exportserver() exportclient() print("all operations finished successfully") return 0 except ExportError as e: print(e) print("an error occurred, see the log above; press the return key to exit") input() return 1 except Exception as e: traceback.print_exc() print("an error occurred, see the log above; press the return key to exit") input() return 1 if __name__ == '__main__': sys.exit(main())

================================================
FILE: sample/tools/CSharpGeneratorForProton/README.md
================================================
[English](https://github.com/sy-yanghuan/CSharpGeneratorForProton#csharpgeneratorforproton) [Chinese](https://github.com/sy-yanghuan/CSharpGeneratorForProton#csharpgeneratorforproton-1)

# CSharpGeneratorForProton

CSharpGeneratorForProton generates C# code that reads xml, json, and protobuf for [proton](https://github.com/sy-yanghuan/proton). It can also convert xml and json to protobuf binary format (using protobuf-net).

## Command Line Parameters

```cmd
Usage: CSharpGeneratorForProton [-p schemaFile] [-f output] [-n namespace]
Arguments
-p : schema file, Proton output
-f : output directory, will put the generated class code
-n : namespace of the generated class
Options
-t : suffix, generates the suffix for the class
-e : open convert exportfile to protobuf
-d : protobuf binary data output directory, use only when '-e' exists
-b : protobuf binary data file extension, use only when '-e' exists
-h : show the help message and exit
```

## Importing the generated code

The generated C# code is not tied to any specific format; the actual read operations are delegated to the GeneratorUtility classes, so the corresponding classes need to be added to the project. The code is in the [GeneratorUtility directory](https://github.com/sy-yanghuan/CSharpGeneratorForProton/tree/master/CSharpGeneratorForProton/GeneratorUtility); you can modify it to fit your specific requirements, such as changing the namespace or swapping the reading library.

- [GeneratorUtility for xml](https://github.com/sy-yanghuan/CSharpGeneratorForProton/blob/master/CSharpGeneratorForProton/CSharpGeneratorForProton/GeneratorUtility/XmlLoader.cs)
- [GeneratorUtility for json](https://github.com/sy-yanghuan/CSharpGeneratorForProton/blob/master/CSharpGeneratorForProton/CSharpGeneratorForProton/GeneratorUtility/JsonLoader.cs)
- [GeneratorUtility for protobuf](https://github.com/sy-yanghuan/CSharpGeneratorForProton/blob/master/CSharpGeneratorForProton/CSharpGeneratorForProton/GeneratorUtility/ProtobufLoader.cs)

## Example

The [Example](https://github.com/sy-yanghuan/CSharpGeneratorForProton/tree/master/CSharpGeneratorForProton/Example) project is a complete instance of loading configuration generated by [proton's sample](https://github.com/sy-yanghuan/proton/tree/master/sample).

## *License*

[Apache 2.0 license](https://github.com/sy-yanghuan/CSharpGeneratorForProton/blob/master/LICENSE).
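Tying the documented flags together, a typical invocation might look like the following (the schema file name, output directory, namespace, and suffix are illustrative, not prescribed by the tool):

```cmd
CSharpGeneratorForProton.exe -p schema.json -f Generated -n My.Game.Config -t Config
```

This would read the schema file produced by proton's -c option and emit the reading classes under the Generated directory in the My.Game.Config namespace, with "Config" appended to each class name.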
_____________________

# CSharpGeneratorForProton

CSharpGeneratorForProton generates C# code that reads xml, json, and protobuf for [proton](https://github.com/sy-yanghuan/proton). It can also convert xml and json configuration files to protobuf's binary format.

## Command Line Parameters

```cmd
Usage: CSharpGeneratorForProton [-p schemaFile] [-f output] [-n namespace]
Arguments
-p : schema file, the output of proton
-f : output directory for the generated class code
-n : namespace of the generated classes
Options
-t : suffix appended to generated class names
-e : enable converting exported files to protobuf
-d : protobuf binary data output directory, only used with '-e'
-b : protobuf binary data file extension, only used with '-e'
-h : show this help message and exit
```

## Importing the Generated Code

The generated C# code is not tied to any specific format; all read operations are delegated to the GeneratorUtility helper classes, so the corresponding class must also be added to your project. The code is in the [GeneratorUtility directory](https://github.com/sy-yanghuan/CSharpGeneratorForProton/tree/master/CSharpGeneratorForProton/CSharpGeneratorForProton/GeneratorUtility) and can be modified to suit your needs, for example by changing the namespace or replacing the reading library.
- [GeneratorUtility for xml](https://github.com/sy-yanghuan/CSharpGeneratorForProton/blob/master/CSharpGeneratorForProton/CSharpGeneratorForProton/GeneratorUtility/XmlLoader.cs)
- [GeneratorUtility for json](https://github.com/sy-yanghuan/CSharpGeneratorForProton/blob/master/CSharpGeneratorForProton/CSharpGeneratorForProton/GeneratorUtility/JsonLoader.cs)
- [GeneratorUtility for protobuf](https://github.com/sy-yanghuan/CSharpGeneratorForProton/blob/master/CSharpGeneratorForProton/CSharpGeneratorForProton/GeneratorUtility/ProtobufLoader.cs)

## Example Project

The [Example](https://github.com/sy-yanghuan/CSharpGeneratorForProton/tree/master/CSharpGeneratorForProton/Example) project is a complete configuration-loading example; its configuration is exported by [proton's sample](https://github.com/sy-yanghuan/proton/tree/master/sample).

## *License*

[Apache 2.0 license](https://github.com/sy-yanghuan/CSharpGeneratorForProton/blob/master/LICENSE).
================================================ FILE: sample/tools/py37/README.md ================================================ # pyexe.exe https://github.com/manthey/pyexe [![Build status](https://ci.appveyor.com/api/projects/status/n18f0997k18x87lw/branch/master?svg=true)](https://ci.appveyor.com/project/manthey/pyexe/branch/master) Here is a stand-alone version of python that is a single Windows executable. It consists of the most recent versions of Python (with builds for 2.7, 3.5, and 3.6 each in 32-bit and 64-bit versions), pywin32, psutil, six, pip, setuptools, and includes all packages that can be included without additional dlls, excepting tkinter. See the appveyor script for build instructions. ## Installing other modules Python is most useful with additional modules. The stand-alone executable can use pip to install modules from pypi to the local directory. For instance: ```bash py36-64.exe -m pip install --no-cache-dir --target . --upgrade sympy ``` Use `-m pip` to run the pip module. Use `--no-cache-dir` to avoid writing files to the user's data directory. Use `--target .` to install to the current directory, allowing you to import the modules easily. Use `--upgrade` to replace existing files, such as the common `bin` directory. Note that using `--upgrade` will overwrite or discard existing files, which may not be what you want (the `bin` directory will end up with just files for the most recently installed package). ## Differences from installed Python Although the stand-alone Python attempts to have the same features as a normally installed Python, there are some differences. - If command line options are specified, there may be some differences in `sys.flags`, since it is read-only and cannot be altered after start. - `PYTHONHOME` is ignored. This option doesn't make sense for a stand-alone version. 
- `-V` and `PYTHONVERBOSE` don't print exactly the same information as installed Python, partly because the verbosity is increased after some modules are already imported.
- `--check-hash-based-pycs` is ignored. This option cannot be changed after the Python executable starts.
- `-R` and `PYTHONHASHSEED` are ignored. These options cannot be changed after the Python executable starts.
- `PYTHONCASEOK` is not honored on Python 2.7. It behaves as installed Python for Python 3.x, i.e., `-E` does not ignore it, but `-I` does; see [Python issue 16826](https://bugs.python.org/issue16826) for some discussion.
- Not all environment variables are handled, such as: `PYTHONIOENCODING`, `PYTHONFAULTHANDLER`, `PYTHONLEGACYWINDOWSFSENCODING`, `PYTHONLEGACYWINDOWSSTDIO`, `PYTHONMALLOC`, `PYTHONCOERCECLOCALE`, `PYTHONDEVMODE`. Some of these are ignored; some are used and cannot be suppressed with `-E` or `-I`. Many of these could be handled properly with additional work.

================================================
FILE: sample/tools/py37/sxl/__init__.py
================================================
from .sxl import Workbook, col2num, num2col

__version__ = '0.0.1a10'

================================================
FILE: sample/tools/py37/sxl/sxl.py
================================================
"""
xl.py - python library to deal with *big* Excel files.
"""

from abc import ABC
from collections import namedtuple, ChainMap
from contextlib import contextmanager
import datetime
import io
from itertools import zip_longest
import os
import re
import string
import xml.etree.cElementTree as ET
from zipfile import ZipFile

# ISO/IEC 29500:2011 in Part 1, section 18.8.30
STANDARD_STYLES = {
    '0': 'General',
    '1': '0',
    '2': '0.00',
    '3': '#,##0',
    '4': '#,##0.00',
    '9': '0%',
    '10': '0.00%',
    '11': '0.00E+00',
    '12': '# ?/?',
    '13': '# ??/??',
    '14': 'mm-dd-yy',
    '15': 'd-mmm-yy',
    '16': 'd-mmm',
    '17': 'mmm-yy',
    '18': 'h:mm AM/PM',
    '19': 'h:mm:ss AM/PM',
    '20': 'h:mm',
    '21': 'h:mm:ss',
    '22': 'm/d/yy h:mm',
    '37': '#,##0 ;(#,##0)',
    '38': '#,##0 ;[Red](#,##0)',
    '39': '#,##0.00;(#,##0.00)',
    '40': '#,##0.00;[Red](#,##0.00)',
    '45': 'mm:ss',
    '46': '[h]:mm:ss',
    '47': 'mmss.0',
    '48': '##0.0E+0',
    '49': '@',
}

ExcelErrorValue = namedtuple('ExcelErrorValue', 'value')


class ExcelObj(ABC):
    """
    Abstract base class for other excel objects (workbooks, worksheets, etc.)
    """
    main_ns = 'http://schemas.openxmlformats.org/spreadsheetml/2006/main'
    rel_ns = 'http://schemas.openxmlformats.org/officeDocument/2006/relationships'

    @staticmethod
    def tag_with_ns(tag, ns):
        "Return XML tag with namespace that can be used with ElementTree"
        return '{%s}%s' % (ns, tag)

    @staticmethod
    def col_num_to_letter(n):
        "Return column letter for column number ``n``"
        string = ""
        while n > 0:
            n, remainder = divmod(n - 1, 26)
            string = chr(65 + remainder) + string
        return string

    @staticmethod
    def col_letter_to_num(letter):
        "Return column number for column letter ``letter``"
        assert re.match(r'[A-Z]+', letter)
        num = 0
        for char in letter:
            num = num * 26 + (ord(char.upper()) - ord('A')) + 1
        return num


class Worksheet(ExcelObj):
    """
    Excel worksheet
    """

    def __init__(self, workbook, name, number, location=''):
        self._used_area = None
        self._row_length = None
        self._num_rows = None
        self._num_cols = None
        self.workbook = self.wb = workbook
        self.name = name
        self.number = number
        self.location = location or 'xl/worksheets/sheet{number}.xml'

    @contextmanager
    def get_sheet_xml(self):
        "Get a pointer to the xml file underlying the current sheet"
        with self.workbook.xls.open(self.location) as f:
            yield io.TextIOWrapper(f, self.workbook.encoding)

    @property
    def range(self):
        "Return data found in range of cells"
        return Range(self)

    @property
    def rows(self):
        "Iterator that will yield every row in this sheet between start/end"
        return Range(self)

    def _set_dimensions(self):
        "Return the 'standard' row length of each row in this worksheet"
        if ':' not in self.used_area:
            self._num_cols = 0
            self._num_rows = 0
        else:
            _, end = self.used_area.split(':')
            last_col, last_row = re.match(r"([A-Z]+)([0-9]+)", end).groups()
            self._num_cols = self.col_letter_to_num(last_col)
            self._num_rows = int(last_row)

    def _get_num_cols(self):
        "Return the number of standard columns in this worksheet"
        if self._num_cols is None:
            self._set_dimensions()
        return self._num_cols

    def _set_num_cols(self, n):
        "Set the number of columns in the sheet (use with caution!)"
        self._num_cols = n

    num_cols = property(_get_num_cols, _set_num_cols)

    @property
    def num_rows(self):
        "Return the total number of rows used in this worksheet"
        if self._num_rows is None:
            self._set_dimensions()
        return self._num_rows

    @property
    def used_area(self):
        "Return the used area of this sheet"
        if self._used_area is not None:
            return self._used_area
        dimension_tag = self.tag_with_ns('dimension', self.main_ns)
        sheet_data_tag = self.tag_with_ns('sheetData', self.main_ns)
        with self.get_sheet_xml() as sheet:
            for event, elem in ET.iterparse(sheet, events=('start', 'end')):
                if event == 'start':
                    if elem.tag == dimension_tag:
                        used_area = elem.get('ref')
                        if used_area != 'A1':
                            break
                    if elem.tag == sheet_data_tag:  # unreliable
                        if list(elem):
                            num_cols = len(list(elem)[0])
                            used_area = f'A1:{num2col(num_cols)}{len(elem)}'
                        break
                elem.clear()
        self._used_area = used_area
        return used_area

    def head(self, num_rows=10):
        "Return first 'num_rows' from this worksheet"
        return self.rows[:num_rows + 1]  # 1-based

    def cat(self, tab=1):
        "Return/yield all rows from this worksheet"
        dat = self.rows[1]  # 1 based!
        XLRec = namedtuple('XLRec', dat[0], rename=True)  # pylint: disable=C0103
        for row in self.rows[1:]:
            yield XLRec(*row)


class Range(ExcelObj):
    """
    Excel ranges
    """

    def __init__(self, ws):
        self.worksheet = self.ws = ws
        self.start = None
        self.stop = None
        self.step = None
        self.colstart = None
        self.colstop = None
        self.colstep = None

    def __len__(self):
        return self.worksheet.num_rows

    def __iter__(self):
        with self.ws.get_sheet_xml() as xml_doc:
            row_tag = self.tag_with_ns('row', self.main_ns)
            c_tag = self.tag_with_ns('c', self.main_ns)
            v_tag = self.tag_with_ns('v', self.main_ns)
            row = []
            this_row = -1
            next_row = 1 if self.start is None else self.start
            # last_row = self.ws.num_rows + 1 if self.stop is None else self.stop
            last_row = 1_048_576 if self.stop is None else self.stop
            context = ET.iterparse(xml_doc, events=('start', 'end'))
            context = iter(context)
            event, root = next(context)
            for event, elem in context:
                if event == 'end':
                    if elem.tag == row_tag:
                        this_row = int(elem.get('r'))
                        if this_row >= last_row:
                            break
                        while next_row < this_row:
                            yield self._row([])
                            next_row += 1
                        if this_row == next_row:
                            yield self._row(row)
                            next_row += 1
                        row = []
                        this_row = -1
                        root.clear()
                    elif elem.tag == c_tag:
                        val = elem.findtext(v_tag)
                        if not val:
                            is_elem = elem.find(self.tag_with_ns('is', self.main_ns))
                            if is_elem:
                                val = is_elem.findtext(self.tag_with_ns('t', self.main_ns))
                        if val:  # only append cells with values
                            cell = ['', '', '', '']  # ref, type, value, style
                            cell[0] = elem.get('r')  # cell ref
                            cell[1] = elem.get('t')  # cell type
                            if cell[1] == 's':  # string
                                cell[2] = self.ws.workbook.strings[int(val)]
                            else:
                                cell[2] = val
                            cell[3] = elem.get('s')  # cell style
                            row.append(cell)

    def __getitem__(self, rng):
        if isinstance(rng, slice):
            if rng.start is not None:
                self.start = rng.start
            if rng.stop is not None:
                self.stop = rng.stop
            if rng.step is not None:
                self.step = rng.step
            matx = [_ for _ in self]
            self.start = self.stop = self.step = None
            return matx
        elif isinstance(rng, str):
            if ':' in rng:
                beg, end = rng.split(':')
            else:
                beg = end = rng
            cell_split = lambda cell: re.match(r"([A-Z]+)([0-9]+)", cell).groups()
            first_col, first_row = cell_split(beg)
            last_col, last_row = cell_split(end)
            first_col = self.col_letter_to_num(first_col) - 1  # python addressing
            first_row = int(first_row)
            last_col = self.col_letter_to_num(last_col)
            last_row = int(last_row)
            self.start = first_row
            self.stop = last_row + 1
            self.colstart = first_col
            self.colstop = last_col
            matx = [_ for _ in self]
            # reset
            self.start = self.stop = self.step = None
            self.colstart = self.colstop = self.colstep = None
            return matx
        elif isinstance(rng, int):
            self.start = rng
            self.stop = rng + 1
            matx = [_ for _ in self]
            self.start = self.stop = self.step = None
            return matx
        else:
            raise NotImplementedError("Cannot understand request")

    def __call__(self, rng):
        return self.__getitem__(rng)

    def _row(self, row):
        lst = [None] * self.ws.num_cols
        col_re = re.compile(r'[A-Z]+')
        col_pos = 0
        for cell in row:
            # apparently, 'r' attribute is optional and some MS products don't
            # spit it out. So we default to incrementing from last known col
            # (or 0 if we are at the beginning) when r is not available.
            if cell[0]:
                col = cell[0][:col_re.match(cell[0]).end()]
                col_pos = self.col_letter_to_num(col) - 1
            else:
                col_pos += 1
            if col_pos >= len(lst):  # dimensions may not be set right in worksheet
                extend_by = col_pos - len(lst) + 1
                self.ws.num_cols += extend_by
                lst += [None for _ in range(extend_by)]
            try:
                style = self.ws.wb.styles[int(cell[3])]
            except Exception as e:
                style = ''
            # convert to python value (if necessary)
            celltype = cell[1]
            cellvalue = cell[2]
            if celltype in ('str', 's', 'inlineStr'):
                lst[col_pos] = cellvalue
            elif celltype == 'b':
                lst[col_pos] = bool(int(cellvalue))
            elif celltype == 'e':
                lst[col_pos] = ExcelErrorValue(cellvalue)
            elif celltype == 'bl':
                lst[col_pos] = None
            # Lastly, default to a number
            else:
                lst[col_pos] = float(cellvalue)
        colstart = 0 if self.colstart is None else self.colstart
        colstop = self.ws.num_cols if self.colstop is None else self.colstop
        return lst[colstart:colstop]


class Workbook(ExcelObj):
    """
    Excel workbook
    """

    def __init__(self, file_obj, workbook_path=None, encoding='utf8'):
        self.xls = ZipFile(file_obj)
        self.encoding = encoding
        self._strings = None
        self._sheets = None
        self._styles = None
        self.date_system = self.get_date_system()
        if workbook_path:
            self.name = os.path.basename(workbook_path)
            self.path = workbook_path
        else:
            self.name = self.workbook_path = ''

    def get_date_system(self):
        "Determine the date system used by the current workbook"
        with self.xls.open('xl/workbook.xml') as xml_doc:
            tree = ET.parse(io.TextIOWrapper(xml_doc, self.encoding))
            tag = self.tag_with_ns('workbookPr', self.main_ns)
            tag_element = tree.find(tag)
            if tag_element and tag_element.get('date1904') == '1':
                return 1904
            return 1900

    @property
    def sheets(self):
        "Return list of all sheets in workbook"
        if self._sheets is not None:
            return self._sheets
        tag = self.tag_with_ns('sheet', self.main_ns)
        ref_tag = self.tag_with_ns('id', self.rel_ns)
        sheet_map = {}
        locs = {}  # locations from relationship id to target location
        with self.xls.open('xl/_rels/workbook.xml.rels') as xml_doc:
            tree = ET.parse(io.TextIOWrapper(xml_doc, self.encoding))
            for rshp in tree.iter(self.tag_with_ns(
                    'Relationship',
                    'http://schemas.openxmlformats.org/package/2006/relationships')):
                id = rshp.get('Id')
                target = rshp.get('Target')
                locs[id] = target
        with self.xls.open('xl/workbook.xml') as xml_doc:
            tree = ET.parse(io.TextIOWrapper(xml_doc, self.encoding))
            for sheet in tree.iter(tag):
                name = sheet.get('name')
                ref = sheet.get(ref_tag)
                num = int(sheet.get('sheetId'))
                sheet = Worksheet(
                    self, name, num,
                    'xl/' + locs[ref] if not locs[ref].startswith('/')
                    else locs[ref][1:])
                sheet_map[name] = sheet
                sheet_map[num] = sheet
        self._sheets = sheet_map
        return self._sheets

    @property
    def strings(self):
        "Return list of shared strings within this workbook"
        if self._strings is not None:
            return self._strings
        # Cannot use t element (which we were doing before). See
        # http://bit.ly/2J7xAPu for more info on shared strings.
        tag = self.tag_with_ns('si', self.main_ns)
        strings = []
        with self.xls.open('xl/sharedStrings.xml') as xml_doc:
            tree = ET.parse(io.TextIOWrapper(xml_doc, self.encoding))
            for elem in tree.iter(tag):
                strings.append(''.join(_ for _ in elem.itertext()))
        self._strings = strings
        return strings

    @property
    def styles(self):
        "Return list of styles used within this workbook"
        if self._styles is not None:
            return self._styles
        styles = []
        style_tag = self.tag_with_ns('xf', self.main_ns)
        numfmt_tag = self.tag_with_ns('numFmt', self.main_ns)
        with self.xls.open('xl/styles.xml') as xml_doc:
            tree = ET.parse(io.TextIOWrapper(xml_doc, self.encoding))
            number_fmts_table = tree.find(self.tag_with_ns('numFmts', self.main_ns))
            number_fmts = {}
            if number_fmts_table:
                for num_fmt in number_fmts_table.iter(numfmt_tag):
                    number_fmts[num_fmt.get('numFmtId')] = num_fmt.get('formatCode')
            number_fmts.update(STANDARD_STYLES)
            style_table = tree.find(self.tag_with_ns('cellXfs', self.main_ns))
            if style_table:
                for style in style_table.iter(style_tag):
                    fmtid = style.get('numFmtId')
                    if fmtid in number_fmts:
                        styles.append(number_fmts[fmtid])
        self._styles = styles
        return styles

    def num_to_date(self, number):
        """
        Return date of "number" based on the date system used in this
        workbook. The date system is either the 1904 system or the 1900
        system depending on which date system the spreadsheet is using. See
        http://bit.ly/2He5HoD for more information on date systems in Excel.
        """
        if self.date_system == 1900:
            # Under the 1900 base system, 1 represents 1/1/1900 (so we start
            # with a base date of 12/31/1899).
            base = datetime.datetime(1899, 12, 31)
            # BUT (!), Excel considers 1900 a leap-year which it is not. As
            # such, it will happily represent 2/29/1900 with the number 60, but
            # we cannot convert that value to a date so we throw an error.
            if number == 60:
                raise ValueError("Bad date in Excel file - 2/29/1900 not valid")
            # Otherwise, if the value is greater than 60 we need to adjust the
            # base date to 12/30/1899 to account for this leap year bug.
            elif number > 60:
                base = base - datetime.timedelta(days=1)
        else:
            # Under the 1904 system, 1 represents 1/2/1904 so we start with a
            # base date of 1/1/1904.
            base = datetime.datetime(1904, 1, 1)
        days = int(number)
        partial_days = number - days
        seconds = int(round(partial_days * 86400000.0))
        seconds, milliseconds = divmod(seconds, 1000)
        if days < -693594:
            return days
        date = base + datetime.timedelta(days, seconds, 0, milliseconds)
        if days == 0:
            return date.time()
        return date


# Some helper functions

def num2col(num):
    """Convert given Excel column number to a column letter."""
    result = []
    while num:
        num, rem = divmod(num - 1, 26)
        result[:0] = string.ascii_uppercase[rem]
    return ''.join(result)


def col2num(ltr):
    """Convert given column letter to an Excel column number."""
    num = 0
    for c in ltr:
        if c in string.ascii_letters:
            num = num * 26 + (ord(c.upper()) - ord('A')) + 1
    return num
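The column helpers at the end of sxl.py (`col2num`/`num2col`, mirrored by `ExcelObj.col_letter_to_num`/`col_num_to_letter`) implement Excel's bijective base-26 column naming, where 'A' is 1, 'Z' is 26, and 'AA' is 27. A minimal standalone sketch, with both functions inlined so it runs without the package installed:

```python
import string

def col2num(ltr):
    # Column letter -> 1-based column number, e.g. 'A' -> 1, 'AA' -> 27.
    num = 0
    for c in ltr:
        if c in string.ascii_letters:
            num = num * 26 + (ord(c.upper()) - ord('A')) + 1
    return num

def num2col(num):
    # 1-based column number -> column letter; the `num - 1` in the divmod is
    # what makes the numbering bijective (no "zero" digit exists).
    result = []
    while num:
        num, rem = divmod(num - 1, 26)
        result[:0] = string.ascii_uppercase[rem]
    return ''.join(result)

print(col2num('AB'))    # 28
print(num2col(16384))   # XFD, the last column in modern Excel
for n in (1, 26, 27, 702, 703):
    assert col2num(num2col(n)) == n  # round-trips for single/double/triple letters
```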
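The 1900-system branch of `Workbook.num_to_date` above, including the phantom 2/29/1900 handling, can also be checked in isolation. This sketch copies the method's logic, simplified to whole-day serials under the 1900 system only:

```python
import datetime

def num_to_date_1900(number):
    # Serial 1 is 1/1/1900, so the base date is 12/31/1899. Excel wrongly
    # treats 1900 as a leap year: serial 60 is the non-existent 2/29/1900
    # (rejected here), and serials above 60 shift the base back one day
    # to compensate for the phantom day.
    base = datetime.datetime(1899, 12, 31)
    if number == 60:
        raise ValueError("Bad date in Excel file - 2/29/1900 not valid")
    if number > 60:
        base -= datetime.timedelta(days=1)
    return base + datetime.timedelta(days=int(number))

print(num_to_date_1900(1).date())    # 1900-01-01
print(num_to_date_1900(59).date())   # 1900-02-28
print(num_to_date_1900(61).date())   # 1900-03-01, skipping the phantom day
```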