Adding a supervising process during the KNN process

Data Science Asked by lelorrain7 on October 1, 2021

I am trying to improve my KNN regression process (I would like to use scikit-learn / Python, but the library doesn't matter). I would like to improve my results and gain insight. Here is an example:

I have data measured from an electric motor: an input voltage (U) and current (I), and an output torque (T) and speed (S).

My first attempt is a simple approach: I feed the raw data to a KNN algorithm and use its results. But not every result is physically plausible (even when it is statistically close).

If you add a knowledge layer during the process, a "human in the loop" approach, the results should be better. For example, here you know that the input power is Pin = U·I and the output power is Pout = T·S. The efficiency of the system is eff = Pout / Pin, and it cannot be higher than 1, whereas KNN can generate results with eff > 1.
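To illustrate the constraint, here is a minimal sketch of the efficiency check (all operating-point numbers below are made up for illustration):

```python
def efficiency(U, I, T, S):
    """Motor efficiency: output power (T * S) over input power (U * I)."""
    return (T * S) / (U * I)

# A physically plausible operating point: eff < 1
print(efficiency(U=48.0, I=10.0, T=1.5, S=300.0))  # 0.9375

# A prediction like this one violates the constraint: eff > 1
print(efficiency(U=48.0, I=10.0, T=1.8, S=300.0))  # 1.125
```

Nothing in a plain KNN regressor prevents the second case from occurring, since each output is predicted independently of the others.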

My question is: how can I use this knowledge (an additional condition, a human-in-the-loop approach) during the KNN learning phase to improve my results? Should I keep learning on the initial data or on transformed data? Should I modify the learning process? Is it possible to add "supervisor conditions" that influence the learning process?

Thank you for your help!

One Answer

A KNN regressor, per se, has no dedicated learning process: for each test datapoint whose dependent variable you want to predict, it simply takes a (weighted) average of the dependent variable over the k nearest neighbors in your training set.
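To make that concrete, here is a minimal scikit-learn sketch with synthetic (made-up) training data; note that `fit` does nothing more than store the training set:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Synthetic training data: inputs (U, I), target torque (invented values)
X_train = np.array([[48.0, 5.0], [48.0, 10.0], [24.0, 5.0], [24.0, 10.0]])
y_train = np.array([0.8, 1.6, 0.4, 0.8])

knn = KNeighborsRegressor(n_neighbors=2, weights="distance")
knn.fit(X_train, y_train)  # "fitting" just stores X_train / y_train

# The prediction is a distance-weighted average of the two nearest targets
print(knn.predict([[48.0, 7.0]]))  # a value between 0.8 and 1.6
```

Because each prediction is only an average of stored targets, there is no training loop into which a physical constraint could be injected.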

You can try the Support Vector Regressor or the MLP (neural network) regressor offered by scikit-learn and check whether performance improves, measured by the number of test cases whose predictions imply an efficiency > 1.

If you are adamant on incorporating this condition into the learning process itself, you can use Keras and create a custom loss function that returns a high value whenever the predicted efficiency exceeds 1. Training will be sensitive to the size of this penalty: too large a value can shut down the ReLU neurons, resulting in an all-zero output. But with some care you will be able to get sensible predictions with efficiency < 1.

import tensorflow as tf
from tensorflow.keras import Sequential, regularizers
from tensorflow.keras.layers import Dense

def custom_loss(voltage, current, torque, speed):
    # Assumes y_pred is the predicted torque and that voltage, current and
    # speed are tensors aligned with the batch (adapt to your own setup).
    def loss(y_true, y_pred):
        mse = tf.reduce_mean(tf.square(y_true - y_pred))
        # Implied efficiency: Pout / Pin = (T * S) / (U * I)
        eff = (y_pred * speed) / (voltage * current)
        penalty = tf.reduce_mean(tf.nn.relu(eff - 1.0))
        return mse + 10.0 * penalty  # the penalty weight needs tuning
    return loss

model = Sequential()
model.add(Dense(16, kernel_regularizer=regularizers.l2(0.001), activation='relu'))
model.add(Dense(1))
model.compile(loss=custom_loss(input_1, input_2, input_3, input_4), optimizer='adam')

Answered by Yash Jakhotiya on October 1, 2021
