Entropy Scanner scans the stocks in the 3BB Scanner with a few additional parameters, such as:

  • Weekend Strategy validity on the 15-minute and 5-minute timeframes (case basis)
  • Narrow Range (compression, i.e., inside bars and a modified NR theory)
  • Bollinger Band width
  • Proper 3BB pattern formation
  • Candlestick type, in some cases
  • Dynamic entry: the entry parameters adapt to market conditions. For example, in a consolidating market we need to loosen the validity of the Weekend logic, whereas during results season, when the market becomes highly volatile, we tighten it. (A rough sketch of how such filters can combine follows this list.)
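
To make the idea concrete, here is a rough Python sketch of how a couple of these filters (compression and Bollinger width) could be combined. The column names (`high`, `low`, `bb_width`) and the width threshold are illustrative assumptions; the scanner's actual 3BB, Weekend, and NR logic is not shown.

```python
import pandas as pd

def passes_filters(df: pd.DataFrame, bb_width_max: float = 0.04) -> bool:
    """Illustrative filter stack: inside bar (compression) + narrow Bollinger width.

    `df` is assumed to hold OHLC candles with a precomputed `bb_width` column
    ((upper band - lower band) / middle band). Thresholds are placeholders,
    not the scanner's actual settings.
    """
    last, prev = df.iloc[-1], df.iloc[-2]
    inside_bar = last["high"] <= prev["high"] and last["low"] >= prev["low"]
    narrow_bands = last["bb_width"] <= bb_width_max
    return inside_bar and narrow_bands
```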

The scanner provides both trade entries and a pre-listed “Watchlist” of potential trade entries.

Although the scanner was initially based on price-action logic, it later pivoted to a K-Nearest Neighbors (KNN) machine learning model once we had ample training data.

Scanner Data -

Pricing: The Entropy Scanner is a paid service.

How to Access: Once you pay for it, you will receive an email with complete instructions on how to access the private channels in our communities.

Scanner History -

You can check the last 80 scans/triggers here (updated daily at EOD) –

We have also added a few more filters and conditions to minimize false breakouts. The scanner also checks the immediate daily trend to improve profitability.

How to Trade -

Entry: The moment a stock appears in this scanner, one can immediately place a short trade at the low of the previous candle on the 15-minute timeframe.

Stop Loss: Once the entry is triggered, the stop loss should be set to the high of that very day. (A rough sketch of these levels follows the exit list below.)

Exit Strategy: There are several exit strategies:
  • Exit when price crosses the median of the 1SD Bollinger Band on the 15-minute timeframe.
  • Exit after 30 minutes irrespective of profit/loss.
  • Let it auto square off at day’s end! (Recommended)
  • Trailing stop loss (TSL) with the Weekend or Bounce strategy!
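
To illustrate the entry, stop-loss, and Bollinger-median exit levels described above, here is a minimal Python sketch. It assumes a pandas DataFrame of today's 15-minute OHLC candles (oldest first) with `high`, `low`, and `close` columns; the 20-period Bollinger window is a common default, not a confirmed scanner setting.

```python
import pandas as pd

def short_trade_levels(candles_15m: pd.DataFrame) -> dict:
    """Entry and stop-loss levels for the short setup described above.

    `candles_15m` holds today's 15-minute OHLC candles, oldest first;
    the last row is assumed to be the candle that just closed.
    """
    entry = candles_15m["low"].iloc[-1]      # short entry: low of the previous candle
    stop_loss = candles_15m["high"].max()    # stop loss: high of the day so far
    return {"entry": entry, "stop_loss": stop_loss}

def bollinger_median(close: pd.Series, window: int = 20) -> float:
    """Middle (median) line of the Bollinger Band, used as one exit trigger."""
    return close.rolling(window).mean().iloc[-1]
```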

KNN Model -

K-Nearest Neighbors (KNN) is a simple yet effective machine learning algorithm used for classification and regression tasks. The algorithm is based on the concept that similar data points are likely to have similar outcomes.

In KNN, the model finds the K-nearest data points to a new data point and assigns a label to the new point based on the labels of its K-nearest neighbors. The value of K is a hyperparameter that can be tuned to improve the model’s accuracy.
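
As a minimal from-scratch sketch of that voting step (using Euclidean distance, one common choice assumed here):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train: np.ndarray, y_train: np.ndarray,
                x_new: np.ndarray, k: int = 5):
    """Classify `x_new` by majority vote among its k nearest training points."""
    distances = np.linalg.norm(X_train - x_new, axis=1)  # distance to every training point
    nearest = np.argsort(distances)[:k]                  # indices of the k closest points
    return Counter(y_train[nearest]).most_common(1)[0][0]
```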

For example, if we have a dataset of stocks and we want to predict whether a particular stock will go up or down, we can use KNN. We would first preprocess the data and extract the relevant features such as the stock’s price, volume, etc. We would then split the dataset into training and testing sets.

To make a prediction for a new stock, we would find the K-nearest stocks to the new stock based on the extracted features. We would then assign a label to the new stock based on the labels of its K-nearest neighbors. For instance, if the majority of its K-nearest neighbors have gone up in the past, we would predict that the new stock will also go up.
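
Putting those steps together, a sketch using scikit-learn might look like the following. The features and up/down labels here are synthetic placeholders, and the feature choices (price, volume, a return column) are assumptions for illustration only:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical feature matrix: one row per stock, columns such as
# [last price, volume, 1-day return]; y is 1 if the stock went up, else 0.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 2] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Scaling is part of the pipeline because KNN is distance-based.
model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
model.fit(X_train, y_train)

print("test accuracy:", model.score(X_test, y_test))
print("prediction for a new stock:", model.predict(X_test[:1]))  # 1 = up, 0 = down
```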

KNN is easy to implement and can work well with small datasets. However, it can become computationally expensive for large datasets because the algorithm has to calculate the distance from the query point to every training point. In addition, choosing the value of K can be challenging, and the algorithm is sensitive to the scale of the features.
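
Because K is a hyperparameter and the algorithm is scale-sensitive, one common remedy is a cross-validated grid search over K with feature scaling built into the pipeline. A sketch, again on placeholder data:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))      # placeholder features, as before
y = (X[:, 2] > 0).astype(int)      # placeholder up/down labels

pipe = Pipeline([
    ("scale", StandardScaler()),   # scaling counters KNN's sensitivity to feature scale
    ("knn", KNeighborsClassifier()),
])

# Try several values of K and keep the one with the best cross-validated accuracy.
search = GridSearchCV(pipe, {"knn__n_neighbors": [3, 5, 7, 11, 15]}, cv=5)
search.fit(X, y)
print("best K:", search.best_params_["knn__n_neighbors"])
```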