34 projects
MicroTokenizer
A micro tokenizer for Chinese
corpusboard
Dashboard for CorpusFlow
rasa-chinese-service
Service package for rasa_chinese
rasa-chinese
A Chinese language extension package for Rasa
deliverable-model
A cross-framework machine learning format and API for model deployment.
tokenizer-tools
Tools for tokenizer development and evaluation
seq2annotation
seq2annotation
nlp-utils
Utils for NLP
nlp-dict
Chinese Dictionary for NLP
paddle-ner
A NER extractor written in PaddlePaddle
paddle-tokenizer
A tokenizer written in PaddlePaddle
rasa-contrib
Addons for Rasa
tokenflow
tokenizer
micro-toolkit
Includes a few useful functions for development
ioflow
Input/Output abstraction layer for machine learning
seq2annotation-for-deliverable
Python Boilerplate contains all the boilerplate you need to create a Python package.
hanzi-char-featurizer
four-corner-method
corpusflow
CorpusFlow is an open-source platform for Natural Language Processing. It has a comprehensive, flexible ecosystem of tools, libraries, and community resources that lets researchers push the state of the art in NLP and lets developers easily build and deploy NLP-powered applications.
syntaxflow
Python Boilerplate contains all the boilerplate you need to create a Python package.
discourseflow
Python Boilerplate contains all the boilerplate you need to create a Python package.
utterance
Python Boilerplate contains all the boilerplate you need to create a Python package.
utteranceflow
Python Boilerplate contains all the boilerplate you need to create a Python package.
talkflow
Full-featured platform for conversational agents
conversationflow
Full-featured framework for conversational agents
semanticflow
Full-featured framework for NLP
tf-crf-layer
CRF layer for TensorFlow 1.x
tf-attention-layer
Attention layer for TensorFlow 1.x
s2a-nightly
Nightly builds of seq2annotation
tf-summary-reader
A package for reading data from TensorFlow summary files
MicroRegEx
A micro regular expression engine
seq2label
seq2label
MicroHMM
A micro Python package for HMM models
tokenizers-collection
A simple iterator for using a set of Chinese tokenizers