If you are a machine learning expert, please help me.


from sklearn.linear_model import LinearRegression

regressor = LinearRegression()
regressor.fit(x_train, y_train)     # learn the line's coefficients from the training data
regressor.predict(x_test)           # apply the learned line to unseen data

Here, how does fit work? Does it use gradient descent?
I know the purpose of fit is to find the best-fit line, but how does it find it?
Does it only plug into the simple formula y = mx + c to produce the guessed y, or does it also optimize?
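
For framing: what fit solves here is the ordinary least-squares problem, and (assuming X has full column rank) that problem even has a closed-form solution, so no iterative optimization is strictly required:

\min_{m,\,c} \sum_i \big(y_i - (m x_i + c)\big)^2
\quad\text{or, in matrix form,}\quad
\hat{w} = (X^\top X)^{-1} X^\top y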

I also know that fit is used in feature scaling, where its purpose is to compute the mean and standard deviation; the transform function then scales the data using that calculated mean and SD.
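
For example, a minimal sketch of that fit/transform pairing with StandardScaler (the toy data here is made up purely for illustration):

import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0], [2.0], [3.0], [4.0]])   # toy feature column

scaler = StandardScaler()
scaler.fit(X)                        # fit: learns mean_ and var_ from X
print(scaler.mean_)                  # [2.5]
print(np.sqrt(scaler.var_))          # the standard deviation, ~[1.118]

X_scaled = scaler.transform(X)       # transform: applies (x - mean) / std
print(np.allclose(X_scaled, (X - scaler.mean_) / np.sqrt(scaler.var_)))  # True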

But I am unable to understand how fit and predict work in the case of linear regression,
so please share your view.

If you google the same question, "does sklearn linear regression use gradient descent", the first link is What method does scikit learn use for linear regression? | Data Science and Machine Learning | Kaggle. From there you can learn that it uses a least-squares solver (scipy.linalg.lstsq — SciPy v1.10.1 Manual); most probably, due to the simplicity of linear regression, full SGD is not required.
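
To check that claim concretely, here is a small sketch (the training data below is made up) showing that LinearRegression's coefficients match what scipy.linalg.lstsq returns:

import numpy as np
from scipy.linalg import lstsq
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
x_train = rng.normal(size=(100, 3))              # made-up training data
y_train = x_train @ np.array([2.0, -1.0, 0.5]) + 3.0

reg = LinearRegression().fit(x_train, y_train)   # one-shot solve, no iterations

# the same problem handed straight to the least-squares solver
# (a column of ones is appended so the intercept gets estimated too)
A = np.column_stack([x_train, np.ones(len(x_train))])
coef, *_ = lstsq(A, y_train)

print(reg.coef_, reg.intercept_)   # [ 2. -1.  0.5] 3.0
print(coef)                        # same numbers, intercept last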

So if we use fit from linear_model, is it equivalent to using gradient descent directly? If so, which type of gradient descent does it use, stochastic or batch?

Since gradient descent involves feature scaling to avoid a complex trajectory, are we indirectly doing feature scaling as well when we call fit?

If that is the case, then in some linear regression problems feature scaling would not be strictly needed, since the results come out the same, so using fit alone would be valuable there?
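
As an aside that may help untangle this: LinearRegression does no gradient descent and no scaling, but sklearn does ship a genuinely gradient-descent-based regressor, SGDRegressor, and there scaling is a separate, explicit step rather than something fit does for you. A minimal sketch with made-up data:

import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
x_train = rng.normal(size=(200, 3)) * [1.0, 100.0, 0.01]   # wildly different scales
y_train = x_train @ np.array([2.0, 0.03, 50.0]) + 1.0

# scaling happens in its own pipeline step -- SGDRegressor.fit does not do it
model = make_pipeline(StandardScaler(), SGDRegressor(max_iter=1000, tol=1e-3))
model.fit(x_train, y_train)
print(model.predict(x_train[:3]))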

Is the fit used in the preprocessing steps the same as the one in linear_model? If they are not the same, why are they named the same?

How does predict work? Is it the same as the transform function?
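
For what it's worth, here is my understanding sketched in code (the data is made up): for LinearRegression, predict just applies the learned linear function to new rows, which is different from transform's rescaling:

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
x_train = rng.normal(size=(50, 2))
y_train = x_train @ np.array([1.5, -2.0]) + 0.7

reg = LinearRegression().fit(x_train, y_train)

x_test = rng.normal(size=(5, 2))
manual = x_test @ reg.coef_ + reg.intercept_     # the learned line applied by hand
print(np.allclose(reg.predict(x_test), manual))  # True -- predict is just this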

These are some of the confusions in my mind that I am not sure about, so please help.