Combinatorial Threshold-Linear Networks (CTLNs) are a family of neural network models used to simulate the firing rates of neurons. Each CTLN is determined by a directed graph, which prescribes a system of differential equations governing the firing rate of each neuron in the network. In particular, I am interested in a special family of CTLNs called core motifs: networks whose dynamics admit a unique fixed point support, consisting of every neuron in the network. Identifying core motifs is integral to extrapolating CTLN findings to larger networks such as the brain, but checking whether a CTLN is core is computationally expensive and scales poorly with network size. To make core motifs easier to identify, I formed two conjectures that rule out large numbers of CTLNs as not core using far simpler computations. The first conjecture uses the sign of a determinant and the second uses out-degree uniformity; together they drastically reduce the computation required to search for core motifs, making that search scalable to much larger networks.
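To make the setup concrete, the sketch below builds the standard CTLN weight matrix from a directed graph, simulates the firing-rate dynamics dx_i/dt = -x_i + [sum_j W_ij x_j + theta]_+, and computes the two cheap graph-level quantities the conjectures are based on. This is a minimal Python illustration, not the original implementation: the function names are mine, the parameter values eps = 0.25 and delta = 0.5 are the standard choices from the CTLN literature, and det(I - W) is shown only as a representative determinant whose sign could be tested; the precise determinant and conditions used by the conjectures are not specified in this summary.

```python
import numpy as np

def ctln_weights(A, eps=0.25, delta=0.5):
    """CTLN weight matrix from a binary adjacency matrix A,
    with the convention A[i, j] = 1 iff there is an edge j -> i.
    Standard rule: W[i, j] = -1 + eps   if j -> i,
                   W[i, j] = -1 - delta if j -/-> i,
                   W[i, i] = 0 (no self-loops)."""
    W = np.where(A == 1, -1.0 + eps, -1.0 - delta)
    np.fill_diagonal(W, 0.0)
    return W

def simulate_ctln(W, theta=1.0, x0=None, T=20.0, dt=0.01):
    """Forward-Euler integration of dx/dt = -x + [W x + theta]_+,
    where [.]_+ is the threshold-linear rectification max(0, .)."""
    n = W.shape[0]
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(int(T / dt)):
        x = x + dt * (-x + np.maximum(0.0, W @ x + theta))
        traj.append(x.copy())
    return np.array(traj)  # shape: (steps + 1, n)

def uniform_out_degree(A):
    """Cheap check underlying the out-degree conjecture: do all
    nodes have the same out-degree? With A[i, j] = 1 meaning
    j -> i, the out-degree of node j is the j-th column sum."""
    out_deg = A.sum(axis=0)
    return bool(np.all(out_deg == out_deg[0]))

def det_sign(W):
    """Sign of det(I - W), a determinant that arises in CTLN
    fixed-point computations (a hypothetical stand-in for the
    exact quantity the determinant-sign conjecture tests)."""
    return np.sign(np.linalg.det(np.eye(W.shape[0]) - W))

# Example: the 3-cycle 1 -> 2 -> 3 -> 1, a known core motif.
A = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]])
W = ctln_weights(A)
print(uniform_out_degree(A), det_sign(W))
print(simulate_ctln(W, x0=[0.1, 0.0, 0.0])[-1])  # final firing rates
```

Both screening checks cost only a degree count and a single determinant per graph, whereas verifying the core property directly requires examining fixed point supports across exponentially many subnetworks, which is what makes the brute-force search difficult to scale.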