An internal project at Techainer discovered a bug: when running `python3 server.py` with content like this:
```python
from mlchain.base import ServeModel
from model import Model
from mlchain import mlconfig

# mlconfig.load_config('mlconfig.yaml')

model = Model(weight_path=mlconfig.weight,
              debug=mlconfig.debug)
model = ServeModel(model)

if __name__ == "__main__":
    from mlchain.rpc.server.flask_server import FlaskServer
    FlaskServer(model).run(bind=['127.0.0.1:8004'], gunicorn=True)
```
the call to `self.sess.run` inside the model class hangs forever, while serving the same model through the `mlchain run` CLI works fine.
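A plausible mechanism (an assumption, not a confirmed diagnosis): with `gunicorn=True`, worker processes are forked after the module-level `model = Model(...)` has already created the TF 1.x session, so a forked child can inherit mutexes that were locked by session threads which do not exist in the child, and `self.sess.run` then blocks forever waiting on them. A stdlib-only sketch of that fork-after-lock hazard:

```python
import os
import threading
import time

lock = threading.Lock()

def hold_lock():
    # A background thread grabs the lock and holds it across the fork,
    # mimicking an internal worker thread of a TF session holding a mutex.
    with lock:
        time.sleep(2)

t = threading.Thread(target=hold_lock)
t.start()
time.sleep(0.2)  # make sure the thread owns the lock before we fork

pid = os.fork()
if pid == 0:
    # Child process: the lock's memory was copied in the *locked* state,
    # but the thread that would release it was not copied by fork().
    # A plain lock.acquire() here would hang forever; a timeout shows it.
    acquired = lock.acquire(timeout=1)
    os._exit(1 if acquired else 0)
else:
    _, status = os.waitpid(pid, 0)
    child_could_acquire = os.WEXITSTATUS(status) == 1  # False: child is stuck
    t.join()
```

The parent's thread releases the lock on schedule, but the child can never acquire it, which matches the "hangs only under gunicorn's fork" symptom.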
Note that this model uses Tensorflow 1.14; 1.15 suffers from the same problem.
This DOES NOT affect production usage, since we only use `mlchain run`, but the bug is worth further examination.
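If the fork-ordering hypothesis above holds, one generic mitigation is to defer expensive construction (such as creating the TF session) until code is already running inside the forked worker. The sketch below is not part of the mlchain API; `LazyModel` and `FakeModel` are hypothetical names used purely for illustration:

```python
class LazyModel:
    """Wraps a factory and builds the real object on first attribute access,
    so heavy initialization happens in the worker process, after the fork."""

    def __init__(self, factory):
        self._factory = factory
        self._obj = None

    def __getattr__(self, name):
        # Called only for attributes not found on LazyModel itself,
        # e.g. the wrapped model's methods like .predict().
        if self._obj is None:
            self._obj = self._factory()
        return getattr(self._obj, name)


# Stand-in for the real Model class (hypothetical, for demonstration only).
class FakeModel:
    def predict(self, x):
        return x * 2


lazy = LazyModel(FakeModel)   # nothing heavy happens yet
result = lazy.predict(21)     # FakeModel is constructed here, in the worker
```

Whether this can be wired into `ServeModel` without changes to mlchain itself is an open question; it only illustrates the ordering fix.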