How to merge multiple inputs and embeddings into a single input layer

I have various inputs, some that need embedding. I have been able to create them all as seen below:

[screenshot: model summary]

I can then concatenate them all, for the following:

[screenshot: model summary]

However, I am stuck on where to go from here. I have built the following autoencoder, but I am not sure how to “stack” the previous embedding+input mix on top of this flow:

[screenshot: model summary]

So, how do I make the input layer use what's already been defined above? I tried setting the first encoder layer to take in merge_models, but it fails:

[screenshot: error traceback]

Code is the following:

num_input = Input(shape=(scaled_data.shape[1],), name='input_number_features')
models.append(num_input)
inputs.append(num_input)

binary_input = Input(shape=(binary_data.shape[1],), name='input_binary_features')
models.append(binary_input)
inputs.append(binary_input)
  
for var in cols_to_embed:
    no_of_unique_cat = data[var].nunique()
    # Rule of thumb: embedding size = ceil(sqrt(cardinality))
    embedding_size = int(np.ceil(np.sqrt(no_of_unique_cat)))
    print(var + " - " + str(no_of_unique_cat) + ' unique values to ' + str(embedding_size))
    inpt = tf.keras.layers.Input(shape=(1,),
                                 name='input_' + '_'.join(var.split(' ')))
    embed = tf.keras.layers.Embedding(no_of_unique_cat, embedding_size, trainable=True,
                                      embeddings_initializer=tf.keras.initializers.RandomNormal())(inpt)
    embed_reshaped = tf.keras.layers.Reshape(target_shape=(embedding_size,))(embed)
    models.append(embed_reshaped)
    inputs.append(inpt)

merge_models = tf.keras.layers.concatenate(models)

# Input Layer
input_dim = merge_models.shape[1]
input_layer = Input(shape = (input_dim, ), name = 'input_layer')

# Encoder
encoder = Dense(16, activation='relu')(input_layer)
encoder = Dense(8, activation='relu')(encoder)
encoder = Dense(4, activation='relu')(encoder)

# Bottleneck
z = Dense(2, activation='relu')(encoder)

# Decoder
decoder = Dense(4, activation='relu')(z)
decoder = Dense(8, activation='relu')(decoder)
decoder = Dense(16, activation='relu')(decoder)
decoder = Dense(input_dim, activation='elu')(decoder) # intentionally using 'elu' instead of 'relu'

# Autoencoder
from tensorflow.keras.models import Model
autoencoder = Model(inputs = input_layer, 
                    outputs = decoder,
                    name = 'ae_toy_example')

Answer

You should pass merge_models into the first encoder layer, so the encoder is actually connected to the graph that contains your inputs and embeddings:

encoder = Dense(16, activation='relu')(merge_models)

Then define the final model with the full list of Input layers you collected in inputs:

Model(inputs = inputs, outputs = decoder, name = 'ae_toy_example')

and NOT as:

Model(inputs = input_layer, outputs = decoder, name = 'ae_toy_example')

The standalone input_layer is not connected to merge_models, so a model built on it ignores your embeddings entirely, and the decoder output cannot be traced back to it. That disconnect is what causes the failure you saw.
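Putting it all together, here is a minimal runnable sketch of the corrected wiring. The column counts and the categorical variables ('color' with 7 levels, 'city' with 30) are hypothetical stand-ins for your scaled_data, binary_data, and cols_to_embed:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Embedding, Reshape, Dense, concatenate
from tensorflow.keras.models import Model

# Numeric and binary feature inputs (hypothetical widths: 3 and 2 columns)
num_input = Input(shape=(3,), name='input_number_features')
binary_input = Input(shape=(2,), name='input_binary_features')
inputs = [num_input, binary_input]
models = [num_input, binary_input]

# Hypothetical categoricals: (name, number of unique categories)
for var, n_cat in [('color', 7), ('city', 30)]:
    embedding_size = int(np.ceil(np.sqrt(n_cat)))  # ceil(sqrt(cardinality)) rule
    inpt = Input(shape=(1,), name='input_' + var)
    embed = Embedding(n_cat, embedding_size)(inpt)
    models.append(Reshape((embedding_size,))(embed))
    inputs.append(inpt)

merge_models = concatenate(models)
input_dim = merge_models.shape[1]

# Encoder consumes the concatenated tensor directly -- no separate Input layer
encoder = Dense(16, activation='relu')(merge_models)
encoder = Dense(8, activation='relu')(encoder)
z = Dense(2, activation='relu')(encoder)

# Decoder mirrors the encoder back out to the concatenated width
decoder = Dense(8, activation='relu')(z)
decoder = Dense(16, activation='relu')(decoder)
decoder = Dense(input_dim, activation='elu')(decoder)

autoencoder = Model(inputs=inputs, outputs=decoder, name='ae_toy_example')
autoencoder.compile(optimizer='adam', loss='mse')
```

When training, pass one array per Input, in the same order as the inputs list (e.g. `autoencoder.fit([num_arr, bin_arr, color_arr, city_arr], target, ...)`). The reconstruction target here is the concatenated representation, not the raw inputs, since the embeddings live inside the model.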