These days I am learning SQLAlchemy. When I want to load an object from JSON and save it to MySQL, things get difficult because my model has more than 20 fields, and I wonder whether there are better ways to do this.
My original code follows as an example:
import json
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = 'parent'
    id = Column(Integer, primary_key=True)
    name = Column(String)

json_str = """{"id": 1, "name": "yetship"}"""
obj = json.loads(json_str)
user = User(id=obj.get("id"), name=obj.get("name"))
It works, but as I add more fields it becomes unwieldy.
If your JSON contains only fields that your User model accepts, then you can simply do:
user = User(**obj)
**obj will unpack your dict object, so if obj = {'id': 1, 'name': 'Awesome'}, then User(**obj) does the same as User(id=1, name='Awesome'). You can see the docs on unpacking argument lists for details.
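As a quick illustration of the unpacking itself, here is a minimal sketch in plain Python, independent of SQLAlchemy (greet is just a throwaway example function):

def greet(id=None, name=None):
    # each keyword argument receives the dict value stored under its name
    print(id, name)

obj = {'id': 1, 'name': 'Awesome'}
greet(**obj)  # equivalent to greet(id=1, name='Awesome'); prints: 1 Awesome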
NOTE: it's better to avoid using id as a variable or field name, because id is a Python builtin ('id' in dir(__builtins__) is True), so assigning to it shadows the builtin id() function.
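A short sketch of that pitfall (plain Python, nothing here is SQLAlchemy-specific):

print(id("hello"))  # the builtin id() returns the object's identity
id = 42             # this assignment shadows the builtin in the current scope
# id("hello")       # would now raise TypeError: 'int' object is not callable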
UPD
If your JSON contains fields that don't belong to the model, you can filter them out with a dict comprehension:
user = User(**{k:v for k, v in obj.items() if k in {'id', 'name'}})
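If you would rather not hard-code the field names, you could derive them from the model itself; a sketch assuming the User model above (SQLAlchemy exposes the mapped column names via User.__table__.columns.keys()):

valid_keys = set(User.__table__.columns.keys())  # {'id', 'name'} for this model
user = User(**{k: v for k, v in obj.items() if k in valid_keys})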
UPD for UPD
It depends on the size of your obj; in most cases it's probably better (for both performance and clarity) to iterate over ('id', 'name') rather than over obj.items():
user = User(**{k: obj[k] for k in ('id', 'name') if k in obj})
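Taking it one step further, you could wrap the pattern in a classmethod so the model can build itself from any dict; a minimal self-contained sketch (from_dict is a hypothetical helper name, not part of SQLAlchemy):

import json
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = 'parent'
    id = Column(Integer, primary_key=True)
    name = Column(String)

    @classmethod
    def from_dict(cls, data):
        # keep only the keys that match this model's column names
        cols = cls.__table__.columns.keys()
        return cls(**{k: data[k] for k in cols if k in data})

user = User.from_dict(json.loads('{"id": 1, "name": "yetship", "extra": "ignored"}'))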