I am following the tutorial but failed to build a linear regressor for a dataset generated on top of y = x. Here is the last part of my code; you can find the complete source code here if you want to reproduce my error:
_CSV_COLUMN_DEFAULTS = [[0], [0]]
_CSV_COLUMNS = ['x', 'y']

def input_fn(data_file):
    def parse_csv(value):
        print('Parsing', data_file)
        columns = tf.decode_csv(value, record_defaults=_CSV_COLUMN_DEFAULTS)
        features = dict(zip(_CSV_COLUMNS, columns))
        labels = features.pop('y')
        return features, labels

    # Extract lines from input files using the Dataset API.
    dataset = tf.data.TextLineDataset(data_file)
    dataset = dataset.map(parse_csv)
    iterator = dataset.make_one_shot_iterator()
    features, labels = iterator.get_next()
    return features, labels

x = tf.feature_column.numeric_column('x')
base_columns = [x]

model_dir = tempfile.mkdtemp()
model = tf.estimator.LinearRegressor(model_dir=model_dir, feature_columns=base_columns)
model = model.train(input_fn=lambda: input_fn(data_file=file_path))
Somehow this code fails with the error message
ValueError: Feature (key: x) cannot have rank 0. Give: Tensor("IteratorGetNext:0", shape=(), dtype=int32, device=/device:CPU:0)
Due to the nature of TensorFlow, I found it a bit hard to inspect where it really went wrong based on the given message.

As far as I can tell, the first dimension of the values is meant to be the batch_size. So when input_fn returns the data, it should return it as a batch.
It works once you return the data as a batch, e.g.:
dataset = tf.data.TextLineDataset(data_file)
dataset = dataset.map(parse_csv)
dataset = dataset.batch(10) # or any other batch size
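For completeness, here is a sketch of the asker's input_fn with the batch call folded in; the batch size of 10 is arbitrary, and everything else is unchanged from the question:

def input_fn(data_file):
    def parse_csv(value):
        print('Parsing', data_file)
        columns = tf.decode_csv(value, record_defaults=_CSV_COLUMN_DEFAULTS)
        features = dict(zip(_CSV_COLUMNS, columns))
        labels = features.pop('y')
        return features, labels

    dataset = tf.data.TextLineDataset(data_file)
    dataset = dataset.map(parse_csv)
    dataset = dataset.batch(10)  # adds the leading batch dimension the estimator expects
    iterator = dataset.make_one_shot_iterator()
    features, labels = iterator.get_next()
    return features, labels

With this change, each feature tensor has shape (batch_size,) instead of (), and the rank-0 check passes.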
The "Feature cannot have rank 0" error occurs when the input_fn passed to an Estimator's train, evaluate, or predict does not batch the dataset. The code below shows how the shape of the tensors changes with batch_size. It runs on TF 1.x with eager execution enabled (tf.enable_eager_execution()); on TF 2.x, eager execution is the default, so drop that call and use tf.io.decode_csv in place of tf.decode_csv. In the two code segments below, notice how the shape of the output tensors changes with and without batch_size.
##### content of test.csv ####
feature1,feature2,label
234,235,24
345,345,26
234,345,28
432,567,29
########################
import tensorflow as tf

tf.enable_eager_execution()  # TF 1.x only; eager execution is the default in TF 2.x

CSV_COLUMNS = ['feature1', 'feature2', 'label']
CSV_COLUMN_DEFAULTS = [[0.0], [0.0], [0.0]]

def parse_csv(value):
    # On TF 2.x, use tf.io.decode_csv instead
    columns = tf.decode_csv(value, record_defaults=CSV_COLUMN_DEFAULTS)
    features = dict(zip(CSV_COLUMNS, columns))
    labels = features.pop('label')
    return features, labels
### Without batch size
dataset = tf.data.TextLineDataset(filenames='./test.csv').skip(count=1)
dataset = dataset.map(parse_csv)
for i in dataset:
    print(i)

# Each output tensor here has shape=()
({'feature1': <tf.Tensor: id=247, shape=(), dtype=float32, numpy=234.0>, 'feature2': <tf.Tensor: id=248, shape=(), dtype=float32, numpy=235.0>}, <tf.Tensor: id=249, shape=(), dtype=float32, numpy=24.0>)
({'feature1': <tf.Tensor: id=253, shape=(), dtype=float32, numpy=345.0>, 'feature2': <tf.Tensor: id=254, shape=(), dtype=float32, numpy=345.0>}, <tf.Tensor: id=255, shape=(), dtype=float32, numpy=26.0>)
({'feature1': <tf.Tensor: id=259, shape=(), dtype=float32, numpy=234.0>, 'feature2': <tf.Tensor: id=260, shape=(), dtype=float32, numpy=345.0>}, <tf.Tensor: id=261, shape=(), dtype=float32, numpy=28.0>)
({'feature1': <tf.Tensor: id=265, shape=(), dtype=float32, numpy=432.0>, 'feature2': <tf.Tensor: id=266, shape=(), dtype=float32, numpy=567.0>}, <tf.Tensor: id=267, shape=(), dtype=float32, numpy=29.0>)
### With batch size
dataset = tf.data.TextLineDataset(filenames='./test.csv').skip(count=1)
dataset = dataset.map(parse_csv).batch(batch_size=2)
for i in dataset:
    print(i)

# Each output tensor here has shape=(2,)
({'feature1': <tf.Tensor: id=442, shape=(2,), dtype=float32, numpy=array([234., 345.], dtype=float32)>, 'feature2': <tf.Tensor: id=443, shape=(2,), dtype=float32, numpy=array([235., 345.], dtype=float32)>}, <tf.Tensor: id=444, shape=(2,), dtype=float32, numpy=array([24., 26.], dtype=float32)>)
({'feature1': <tf.Tensor: id=448, shape=(2,), dtype=float32, numpy=array([234., 432.], dtype=float32)>, 'feature2': <tf.Tensor: id=449, shape=(2,), dtype=float32, numpy=array([345., 567.], dtype=float32)>}, <tf.Tensor: id=450, shape=(2,), dtype=float32, numpy=array([28., 29.], dtype=float32)>)
Batching the dataset resolves the "Feature cannot have rank 0" error.
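For reference, a minimal sketch of the same demo written for TF 2.x, assuming tf.io.decode_csv (the TF 2 location of decode_csv); no enable call is needed since eager execution is the default:

import tensorflow as tf  # TF 2.x

CSV_COLUMNS = ['feature1', 'feature2', 'label']
CSV_COLUMN_DEFAULTS = [[0.0], [0.0], [0.0]]

def parse_csv(value):
    columns = tf.io.decode_csv(value, record_defaults=CSV_COLUMN_DEFAULTS)
    features = dict(zip(CSV_COLUMNS, columns))
    labels = features.pop('label')
    return features, labels

dataset = tf.data.TextLineDataset('./test.csv').skip(1)
dataset = dataset.map(parse_csv).batch(2)  # batch(...) adds the leading batch dimension

for features, labels in dataset:
    print(features['feature1'].shape)  # (2,) instead of ()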