"It’s Just a Generic AI Applied to X": Overcoming Obviousness Rejections with Data Preprocessing
Introduction: The "Routine Optimization" Rejection
If you prosecute AI patents, you know the Examiner’s favorite template.
Whether it’s a Non-Final Office Action from the USPTO or an Examination Report from the EPO, the logic is often identical:
"The cited reference teaches the problem domain. A second reference teaches a Neural Network (CNN/RNN/Transformer). It would have been obvious to a Person Having Ordinary Skill in the Art (PHOSITA) to apply the known AI model to the known problem to automate the process."
(Cited as 35 U.S.C. § 103 in the US, or lack of inventive step under Art. 56 EPC at the EPO.)
It is frustrating. Your client spent months collecting data and fine-tuning the model to handle edge cases, but the Examiner dismisses it as a "predictable application of known tools" (citing KSR v. Teleflex in the US) or a "mathematical method with no further technical effect" (in Europe).
However, fighting on the "Novelty of the Model Architecture" is a losing battle. ResNet and BERT are prior art. To win, we must shift the battlefield from the Algorithm to the Data Pipeline.
Here is a strategic framework to overcome "Simple Application of AI" rejections by leveraging Data Preprocessing and Hyperparameter constraints.
1. Data Preprocessing: Pivot from the Model to the Input
When the Examiner argues that the Model is generic, agree with them. Then, pivot to the Input Data.
In both US and EP practice, transforming data to make it suitable for machine learning is often considered a technical feature that confers an inventive step.
✅ The Argument: Solving "Garbage In, Garbage Out"
Argue that applying a generic model to raw data would result in failure (overfitting or non-convergence). The invention lies in the specific preprocessing that enables the model to learn.
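To make this concrete, here is a minimal, purely illustrative sketch of the "Garbage In, Garbage Out" point (the sensor names and values are assumptions, not from any real case). Two raw input channels on wildly different scales will break a generic model's training; the normalization step is what makes the model usable at all.

```python
import numpy as np

# Hypothetical example: two sensor channels on very different scales
# (the channel names and values are illustrative assumptions).
raw = np.array([
    [0.002, 85000.0],   # channel 0: strain (unitless); channel 1: pressure (Pa)
    [0.004, 91000.0],
    [0.003, 88000.0],
])

# Z-score normalization: the preprocessing step that makes a generic
# model trainable. Without it, the large-scale channel dominates the
# gradient updates and the small-scale channel is effectively ignored.
mean = raw.mean(axis=0)
std = raw.std(axis=0)
normalized = (raw - mean) / std

print(normalized.mean(axis=0))  # each channel now centered near 0
print(normalized.std(axis=0))   # each channel now has unit variance
```

The claim then recites not "a neural network" but "normalizing each sensor channel to zero mean and unit variance prior to training": a concrete technical step, not an abstract model.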
2. Hyperparameter Constraints: "Criticality" over "Arbitrary Choice"
Simply claiming "a learning rate of 0.001" will be rejected as routine optimization of a result-effective variable, i.e., an arbitrary design choice. You must frame parameters as Structural Limitations or Critical Constraints.
✅ The Argument: Optimization Difficulty
Show that the specific parameter is not a routine choice but a solution to a specific technical hurdle.
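A toy numerical sketch of what "criticality" means in practice (the function and learning rates here are illustrative assumptions): the identical algorithm converges inside a narrow parameter window and fails outside it. Evidence of this kind, in the specification, is what separates a critical constraint from an arbitrary choice.

```python
def gradient_descent(lr, steps=50, x0=1.0):
    """Minimize f(x) = x^2 with a fixed learning rate; return final |x|."""
    x = x0
    for _ in range(steps):
        x -= lr * 2 * x        # gradient of x^2 is 2x
        if abs(x) > 1e6:       # step size exceeded the stability limit
            return float('inf')
    return abs(x)

# Same algorithm, different parameter: one converges, one diverges.
print(gradient_descent(0.1))   # converges toward 0
print(gradient_descent(1.5))   # diverges
```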
3. The Non-Obviousness of Correlation
Examiners often assume that if Data A exists and Event B happens, feeding A into an AI to predict B is obvious.
Challenge this assumption. The discovery of the correlation itself can be the invention.
[Logic Construction]
Problem: PHOSITA would not expect that "Motor Vibration Data" (Input A) could predict "Machine Failure 3 days later" (Output B) due to high noise levels.
Solution: The Applicant transformed Input A into the Frequency Domain (FFT) and fed it into an LSTM.
Conclusion: The invention is not the use of LSTM. The invention is identifying that Frequency Domain Data contains the latent features necessary for prediction, establishing a non-obvious technical causal link.
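The frequency-domain point in the logic above can be sketched in a few lines (all signal parameters here are illustrative assumptions, not data from a real case): a faint fault signature that is invisible in the raw time series stands out clearly after an FFT, which is exactly the latent feature the claim says the preprocessing exposes.

```python
import numpy as np

# Hypothetical vibration signal: a faint 120 Hz fault signature
# buried in broadband noise (all parameters are assumptions).
rng = np.random.default_rng(0)
fs = 1000                         # sampling rate, Hz
t = np.arange(0, 1, 1 / fs)
signal = 0.3 * np.sin(2 * np.pi * 120 * t) + rng.normal(0, 1.0, t.size)

# In the time domain the fault component is swamped by noise.
# In the frequency domain (FFT magnitude), the 120 Hz peak dominates.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak_hz = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
print(peak_hz)                                 # the hidden signature
```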
4. Actionable Tips for Prosecution
To support these arguments, you need to prepare the "ammunition" during the drafting stage.
Tip 1. The "Comparative Graph" is Mandatory
In US practice, Secondary Considerations (Objective Indicia of Non-Obviousness) are powerful. In EP practice, showing a Technical Effect is mandatory. Both arguments rest on the same evidence: include in the specification a graph or table comparing model performance with and without the claimed preprocessing, or inside and outside the claimed parameter range.
Tip 2. Layer Your Dependent Claims
Keep the independent claim broad, but ensure you have dependent claims specifically reciting:
Specific Preprocessing Steps (Normalization, Augmentation, FFT).
Loss Function Formulas.
Data Structure constraints.
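As an example of the "Loss Function Formulas" bullet, here is a hypothetical custom loss of the kind worth reciting in a dependent claim (the function name and weights are assumptions): an asymmetric cross-entropy that penalizes missed failures more heavily than false alarms.

```python
import numpy as np

# Hypothetical claim-level formula (weights are assumptions): binary
# cross-entropy weighted so that a missed failure (false negative)
# costs fn_weight times more than a false alarm.
def asymmetric_bce(y_true, y_pred, fn_weight=5.0, eps=1e-7):
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(
        fn_weight * y_true * np.log(y_pred)
        + (1 - y_true) * np.log(1 - y_pred)
    )

y_true = np.array([1.0, 0.0])
print(asymmetric_bce(y_true, np.array([0.1, 0.5])))  # missed failure: high loss
print(asymmetric_bce(y_true, np.array([0.9, 0.5])))  # detected failure: low loss
```

A concrete formula like this is far harder to dismiss as "routine optimization" than the bare phrase "a trained classifier."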
Conclusion: It's Not the Engine, It's the Tuning
When an Examiner says, "It’s just AI," our response should be:
"Two cars may have the same engine (Model), but the fuel (Data) and the tuning (Preprocessing) determine whether the car wins the race."
The "Inventive Step" in modern AI patents is rarely found in the neural network architecture itself. It is hidden in the data engineering and the optimization constraints.
Next time you face a "Routine Optimization" rejection, try shifting the focus to the Input Pipeline. You might find the Examiner has run out of ammunition.
📝 Summary Checklist
[ ] Did you shift the argument from the "Model" to the "Data Preprocessing"?
[ ] Did you justify hyperparameters using "Criticality" (US) or "Purposive Selection" (EP)?
[ ] Are Custom Loss Functions or specific Training Strategies (e.g., freezing layers) included in dependent claims?
[ ] Do you have Comparative Data in the specification to prove "Unexpected Results" or "Technical Effect"?