I'm using github.com/jackc/pgx to work with PostgreSQL. Now I want to convert the pgx.Rows returned by Query() to a JSON array. I tried a function written for *sql.Rows, but it doesn't work with *pgx.Rows:
func PgSqlRowsToJson(rows *pgx.Rows) []byte {
    fieldDescriptions := rows.FieldDescriptions()
    var columns []string
    for _, col := range fieldDescriptions {
        columns = append(columns, col.Name)
    }

    count := len(columns)
    tableData := make([]map[string]interface{}, 0)
    values := make([]interface{}, count)
    valuePtrs := make([]interface{}, count)
    for rows.Next() {
        for i := 0; i < count; i++ {
            valuePtrs[i] = &values[i]
        }
        rows.Scan(valuePtrs...)
        entry := make(map[string]interface{})
        for i, col := range columns {
            var v interface{}
            val := values[i]
            b, ok := val.([]byte)
            if ok {
                v = string(b)
            } else {
                v = val
            }
            entry[col] = v
        }
        tableData = append(tableData, entry)
    }
    jsonData, _ := json.Marshal(tableData)
    return jsonData
}
The problem is that Scan() doesn't work with interface{} destinations; it only works with explicitly typed ones.
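For example, this works fine when I scan into explicitly typed destinations (assuming conn is an established *pgx.Conn and a hypothetical users table with an int id and a text name column):

rows, err := conn.Query("SELECT id, name FROM users")
if err != nil {
    log.Fatal(err)
}
defer rows.Close()

for rows.Next() {
    var id int
    var name string
    // Typed destinations: Scan knows how to decode into these.
    if err := rows.Scan(&id, &name); err != nil {
        log.Fatal(err)
    }
    fmt.Println(id, name)
}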
Can you help me fix it?
You can use the pgx.FieldDescription's Type method to retrieve a column's expected type. Passing that to reflect.New, you can then allocate a pointer to a value of that type, and with these newly allocated values you can build a slice of non-nil interface{}s whose underlying values have the expected type.

For example:
func PgSqlRowsToJson(rows *pgx.Rows) []byte {
    fieldDescriptions := rows.FieldDescriptions()
    var columns []string
    for _, col := range fieldDescriptions {
        columns = append(columns, col.Name)
    }

    count := len(columns)
    tableData := make([]map[string]interface{}, 0)
    valuePtrs := make([]interface{}, count)
    for rows.Next() {
        for i := 0; i < count; i++ {
            valuePtrs[i] = reflect.New(fieldDescriptions[i].Type()).Interface() // allocate pointer to type
        }
        rows.Scan(valuePtrs...)
        entry := make(map[string]interface{})
        for i, col := range columns {
            var v interface{}
            val := reflect.ValueOf(valuePtrs[i]).Elem().Interface() // dereference pointer
            b, ok := val.([]byte)
            if ok {
                v = string(b)
            } else {
                v = val
            }
            entry[col] = v
        }
        tableData = append(tableData, entry)
    }
    jsonData, _ := json.Marshal(tableData)
    return jsonData
}
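For completeness, here's a rough sketch of how you might call it (the connection parameters and the users table below are placeholders, not something from your code):

// Connection parameters are placeholders; adjust for your setup.
conn, err := pgx.Connect(pgx.ConnConfig{Host: "localhost", Database: "mydb", User: "postgres"})
if err != nil {
    log.Fatal(err)
}
defer conn.Close()

rows, err := conn.Query("SELECT id, name FROM users")
if err != nil {
    log.Fatal(err)
}
defer rows.Close()

// Prints something like: [{"id":1,"name":"alice"},{"id":2,"name":"bob"}]
fmt.Println(string(PgSqlRowsToJson(rows)))

Note that, like your original, the function ignores the errors from Scan and json.Marshal; in production code you'd probably want to return and check those.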