
Facet classification neural network


A classification neural network for piecewise linearly separating an input space to classify input patterns is described. The multilayered neural network comprises an input node, a plurality of difference nodes in a first layer, a minimum node, a plurality of perceptron nodes in a second layer and an output node. In operation, the input node broadcasts the input pattern to all of the difference nodes. The difference nodes, along with the minimum node, identify in which Voronoi cell of the piecewise linear separation the input pattern lies. The difference node defining the Voronoi cell localizes the input pattern to a local coordinate space and sends it to a corresponding perceptron, which produces a class designator for the input pattern.
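In Python terms, the inference path the abstract describes might look like the following sketch. This is a hedged illustration rather than the patented circuit: the class name, the Euclidean distance used for cell membership, and the 0/1 class designators are assumptions.

```python
import numpy as np

class FacetClassifier:
    """Minimal sketch of the inference path in the abstract. 'refs' play the
    role of the difference-node reference vectors; 'normals' and 'thresholds'
    play the role of the perceptron-node weights. All names are illustrative."""

    def __init__(self, refs, normals, thresholds):
        self.refs = np.asarray(refs, dtype=float)          # one row per difference node
        self.normals = np.asarray(normals, dtype=float)    # one row per perceptron node
        self.thresholds = np.asarray(thresholds, dtype=float)

    def classify(self, x):
        x = np.asarray(x, dtype=float)
        # Difference nodes: express the input in each node's local coordinates.
        diffs = x - self.refs
        # Minimum node: the smallest difference magnitude names the Voronoi
        # cell containing x, and hence the transforming node.
        winner = int(np.argmin(np.sum(diffs * diffs, axis=1)))
        # The winning node's perceptron tests which side of its partitioning
        # hyperplane the localized input falls on.
        score = float(diffs[winner] @ self.normals[winner])
        return 1 if score >= self.thresholds[winner] else 0
```

Each reference vector acts as a Voronoi cell generator, and the perceptron makes a single linear cut inside the winning cell, so the overall decision boundary is piecewise linear.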

What is claimed is:

1. A classification neural network for classifying input patterns, said classification neural network comprising:
an input node for receiving said input patterns;
a plurality of nodes connected to said input node for transforming said input patterns to localized domains defined by each node;
a minimum node connected to said plurality of nodes for identifying a transforming node from said plurality of nodes;
a plurality of perceptron nodes, each said perceptron node connected to a corresponding node from said plurality of nodes for producing class designators, wherein said transforming node transmits a signal to its corresponding perceptron node for producing a class designator; and
an output node connected to said plurality of perceptron nodes for receiving said class designators from said plurality of perceptron nodes.

2. The classification neural network according to claim 1, wherein each said node comprises:
memory means for storing a reference vector; and
processing means for computing a vector, determining a magnitude of said vector and comparing said magnitude with magnitude values received from said minimum node.

3. The classification neural network according to claim 2, wherein said minimum node identifies said transforming node by receiving said magnitudes of said vectors from said nodes, determining a minimum magnitude from among said magnitudes, and sending said minimum magnitude to said nodes.

4. The classification neural network according to claim 2, wherein said memory means stores a first reference vector and a second reference vector and wherein the difference between said input pattern and said first reference vector comprises said vector.

5. The classification neural network according to claim 4, wherein said transforming node sends a partitioning hypersurface origin vector to its corresponding perceptron node, said partitioning hypersurface origin vector being the difference between said input pattern and said second reference vector.

6. The classification neural network according to claim 2, wherein said nodes transform said input pattern to localized domains by creating Voronoi cells defined by said reference vector stored in each said node.

7. The classification neural network according to claim 1, wherein said minimum node identifies said transforming node by identifying in which localized domain said input pattern lies.

8. The classification neural network according to claim 1, wherein said perceptron node corresponding with said transforming node classifies said input pattern.

9. The classification neural network according to claim 3, wherein said transforming node receives said minimum magnitude value from said minimum node, compares said received minimum magnitude value with said magnitude of said vector and sends said vector to its corresponding perceptron node, which classifies said input pattern.

10. The classification neural network according to claim 1, wherein said nodes transform said input pattern to localized domains by determining in which Voronoi cells defined by each said node said input pattern lies.

11. The classification neural network according to claim 10, wherein said perceptron nodes produce class designators by determining on which side of partitioning hypersurfaces within said Voronoi cell said input pattern lies.
12. The classification neural network according to claim 1, wherein said nodes further comprise node synaptic weight memory means for storing node synaptic weights and wherein said perceptron nodes further comprise perceptron node synaptic weight memory means for storing perceptron node synaptic weights.

13. The classification neural network according to claim 12, further comprising:
means for acquiring a plurality of sample points and correct classifications of said sample points;
means for establishing a candidate post for each said sample point, said candidate post comprising said sample point, a nearest neighbor of opposite type for said sample point, a midpoint vector and a normal vector;
means for adjusting each said midpoint vector to correctly classify a set of said sample points that may be classified by said candidate post associated with said midpoint vector;
means for pruning the size of said network to establish a final post set; and
means for assigning components of each said midpoint vector from said final post set as synaptic weights for a corresponding node and components of each said normal vector from said final post set as synaptic weights for a corresponding perceptron node.

14. The classification neural network according to claim 13, wherein said set of sample points that may be classified by said candidate post comprises a set of sample points that are classified by said candidate post associated with said midpoint vector rather than the candidate post associated with said sample points.

15. The classification neural network according to claim 14, wherein said means for adjusting each said midpoint vector comprises:
means for defining a sphere centered at one-third the distance from said sample point to its nearest neighbor of opposite type and located therebetween, said sphere having a radius of a distance from its center to said sample point;
means for projecting misclassified sample points within said sphere onto a line running between said sample point and said nearest neighbor of opposite type; and
means for moving said midpoint vector to the midpoint between the most peripherally projected misclassified sample point within said sphere and said nearest neighbor of opposite type of said sample point.

16. The classification neural network according to claim 13, wherein said set of sample points that may be classified by said candidate post comprises a set of sample points that are classified by said candidate post associated with said midpoint vector rather than the candidate post associated with said sample points and are incorrectly classified by said candidate post associated with said midpoint vector.

17. The classification neural network according to claim 13, wherein said set of sample points that may be classified by said candidate post comprises a set of sample points that are closer to said sample point of said candidate post than said nearest neighbor of opposite type.
18. The classification neural network according to claim 17, wherein said means for adjusting each said midpoint vector comprises:
means for defining a first sphere centered around said sample point, a radius of said first sphere being a distance from said sample point to its nearest neighbor of opposite type;
means for defining a second sphere centered at said nearest neighbor of opposite type of said sample point, said second sphere having a radius of a distance from its center to its nearest neighbor of opposite type;
means for identifying a third sphere from said first sphere and said second sphere, said third sphere having only one class of sample points within its boundaries;
means for projecting misclassified sample points within said third sphere onto a partitioning hypersurface originating at said midpoint vector; and
means for moving said midpoint vector to the midpoint between the most peripherally projected misclassified sample point and said nearest neighbor of opposite type of said sample point.

19. The classification neural network according to claim 13, wherein said means for pruning the size of the network comprises:
means for defining a popularity sphere centered at each said post midpoint, said popularity sphere having a popularity sphere radius equal to a distance from said post midpoint to a first sample point misclassified by said post midpoint;
means for counting the number of correctly classified sample points within said popularity sphere;
means for defining a proxy sphere centered at each said sample point, said proxy sphere having a proxy sphere radius equal to a distance from said sample point to a first post midpoint that misclassifies said sample point;
means for identifying a proxy post for each said sample point, said proxy post being a candidate post within said proxy sphere of said sample point with the highest number of correctly classified sample points within its popularity sphere;
means for eliminating candidate posts that are not proxy posts; and
means for determining whether no said candidate posts are eliminated.

20. The classification neural network according to claim 1, wherein said plurality of nodes comprise:
first memory means for storing first weight factors;
first adder means for subtracting said first weight factors from said input pattern to produce vectors;
second memory means for storing said vectors;
multiplier means for squaring components of said vectors; and
second adder means for adding said squared components of said vectors to produce squared vector magnitudes.

21. The classification neural network according to claim 1, wherein said minimum node comprises:
memory means for storing squared vector magnitudes;
first comparator means for determining a minimum of a first input and a second input of said first comparator means, said first input connected to said memory means and an output of said first comparator means connected to said second input;
adder means for subtracting said vector magnitudes from said minimum of said first input and said second input;
second comparator means for determining if said squared vector magnitude is a minimum magnitude;
a decision module for outputting an address location of said minimum magnitudes; and
an address generator for outputting a maximum address location of received addresses.
22. The classification neural network according to claim 1, wherein said perceptron nodes comprise:
first memory means for storing first weight factors;
multiplier means for multiplying a vector with said first weight factors to produce weighted vector components;
adder means for adding said weighted vector components;
second memory means for storing a threshold value; and
comparator means for comparing said sum of weighted vector components with said threshold value and outputting a first class designator if said sum of weighted vector components is greater than or equal to said threshold value and outputting a second class designator if said sum of weighted vector components is less than said threshold value.

23. A classification neural network for classifying input patterns, said classification neural network comprising:
an input node for receiving said input patterns;
a plurality of first nodes connected to said input node for determining in which localized domain defined by each node each said input pattern lies;
a plurality of second nodes connected to a corresponding first node from said plurality of first nodes for localizing each said input pattern to said domain defined by its corresponding first node;
a minimum node connected to said plurality of first nodes for identifying a transforming first node from said plurality of first nodes;
a plurality of perceptron nodes, each said perceptron node connected to a corresponding second node from said plurality of second nodes for producing class designators; and
an output node connected to said plurality of perceptron nodes for receiving said class designators from said plurality of perceptron nodes.

24. The classification neural network according to claim 23, wherein said plurality of first nodes further comprises:
memory means for storing a first reference vector; and
processing means for computing a vector, determining a magnitude of said vector, and comparing said magnitude with magnitude values received from said minimum node.

25. The classification neural network according to claim 24, wherein said plurality of second nodes further comprises:
memory means for storing a second reference vector; and
processing means for computing a partitioning hypersurface origin vector.

26. The classification neural network according to claim 23, wherein said perceptron node corresponding with said transforming first node classifies said input pattern.

27. The classification neural network according to claim 23, wherein said transforming first node instructs its corresponding second node to send a localized input pattern vector to its corresponding perceptron node, which classifies said input pattern.

28. The classification neural network according to claim 23, wherein said first nodes transform said input pattern to localized domains by determining in which Voronoi cells defined by each said first node said input pattern lies.

29. The classification neural network according to claim 23, wherein said perceptron nodes produce class designators by determining on which side of partitioning hypersurfaces defined by said second nodes within said Voronoi cell said input pattern lies.

30. The classification neural network according to claim 23, wherein said first nodes transform said input pattern to localized domains by creating Voronoi cells defined by a reference vector stored in each said first node.
31. The classification neural network according to claim 23, wherein said first nodes further comprise first node synaptic weight memory means for storing first node synaptic weights, said second nodes further comprise second node synaptic weight memory means for storing second node synaptic weights and wherein said perceptron nodes further comprise perceptron node synaptic weight memory means for storing perceptron node synaptic weights.

32. The classification neural network according to claim 31, further comprising:
means for acquiring a plurality of sample points and correct classifications of said sample points;
means for establishing a candidate post for each said sample point, said candidate post comprising said sample point, a nearest neighbor of opposite type for said sample point, a midpoint vector, a normal vector and a partitioning hyperplane origin vector;
means for adjusting each said hyperplane origin vector to correctly classify a set of said sample points that may be classified by said candidate post associated with said midpoint vector;
means for pruning the size of said network to establish a final post set; and
means for assigning components of each said midpoint vector from said final post set as synaptic weights for a corresponding first node, components of each said partitioning hyperplane origin vector from said final post set as synaptic weights for a corresponding second node and components of each said normal vector from said final post set as synaptic weights for a corresponding perceptron node.

33. The classification neural network according to claim 32, wherein said set of sample points that may be classified by said candidate post comprises a set of sample points that are classified by said candidate post associated with said midpoint vector rather than the candidate post associated with said sample points.

34. The classification neural network according to claim 33, wherein said means for adjusting each said partitioning hyperplane vector comprises:
means for defining a sphere centered at one-third the distance from said sample point to its nearest neighbor of opposite type and located therebetween, said sphere having a radius of a distance from its center to said sample point;
means for projecting misclassified sample points within said sphere onto a line running between said sample point and said nearest neighbor of opposite type; and
means for moving said partitioning hyperplane vector to the midpoint between the most peripherally projected misclassified sample point within said sphere and said nearest neighbor of opposite type of said sample point.

35. The classification neural network according to claim 32, wherein said set of sample points that may be classified by said candidate post comprises a set of sample points that are classified by said candidate post associated with said midpoint vector rather than the candidate post associated with said sample points and are incorrectly classified by said candidate post associated with said midpoint vector.

36. The classification neural network according to claim 32, wherein said set of sample points that may be classified by said candidate post comprises a set of sample points that are closer to said sample point of said candidate post than said nearest neighbor of opposite type.
37. The classification neural network according to claim 32, wherein said means for adjusting each said partitioning hyperplane origin vector comprises:
means for defining a first sphere centered around said sample point, a radius of said first sphere being a distance from said sample point to its nearest neighbor of opposite type;
means for defining a second sphere centered at said nearest neighbor of opposite type of said sample point, said second sphere having a radius of a distance from its center to its nearest neighbor of opposite type;
means for identifying a third sphere from said first sphere and said second sphere, said third sphere having only one class of sample points within its boundaries;
means for projecting misclassified sample points within said third sphere onto a partitioning hypersurface originating at said midpoint vector; and
means for moving said partitioning hyperplane origin vector to the midpoint between the most peripherally projected misclassified sample point and said nearest neighbor of opposite type of said sample point.

38. The classification neural network according to claim 32, wherein said means for pruning the size of the network comprises:
means for defining a popularity sphere centered at each said post midpoint, said popularity sphere having a popularity sphere radius equal to a distance from said post midpoint to a first sample point misclassified by said post midpoint;
means for counting the number of correctly classified sample points within said popularity sphere;
means for defining a proxy sphere centered at each said sample point, said proxy sphere having a proxy sphere radius equal to a distance from said sample point to a first post midpoint that misclassifies said sample point;
means for identifying a proxy post for each said sample point, said proxy post being a candidate post within said proxy sphere of said sample point with the highest number of correctly classified sample points within its popularity sphere;
means for eliminating candidate posts that are not proxy posts; and
means for determining whether no said candidate posts are eliminated.

39. A method of classifying an input pattern in a neural network system, said system comprising a plurality of difference nodes for transforming said input pattern to localized domains defined by each difference node, a plurality of perceptron nodes, each said perceptron node connected to a corresponding difference node from said plurality of difference nodes and an output node connected to said plurality of perceptron nodes, said method comprising the steps of:
a) broadcasting said input pattern to said plurality of difference nodes;
b) computing a difference between said input pattern and a reference vector at each said difference node, said difference being a difference vector;
c) identifying a transforming difference node from among said plurality of difference nodes, said transforming difference node representing a localized domain in which said input pattern lies;
d) sending a localized vector from said transforming difference node to a corresponding perceptron node from said plurality of perceptron nodes; and
e) producing a class designator from said localized vector at said corresponding perceptron node.
40. The method of classifying an input pattern in a neural network system according to claim 39, said neural network further comprising a minimum node connected to said difference nodes, wherein said step of identifying said transforming difference node comprises the steps of:
a) computing a magnitude of each said difference vector at each said difference node;
b) sending said magnitude of each said difference vector to said minimum node;
c) determining a minimum magnitude from among said magnitudes of said difference vectors at said minimum node;
d) broadcasting said minimum magnitude to said plurality of difference nodes;
e) comparing said magnitude of each said difference vector with said minimum magnitude; and
f) designating said difference node with said magnitude of said difference vector matching said minimum magnitude as said transforming difference node.

41. The method of classifying an input pattern in a neural network system according to claim 39, wherein said localized vector is said difference vector.

42. The method of classifying an input pattern in a neural network system according to claim 39, further comprising the step of computing a second difference vector between said input pattern and a second reference vector at each said difference node, said difference being a partitioning hypersurface origin vector.

43. The method of classifying an input pattern in a neural network system according to claim 39, wherein said localized vector is said partitioning hypersurface origin vector.

44. A method of classifying an input pattern in a neural network system, said system comprising a plurality of nodes for transforming said input pattern to localized domains defined by each node, and a plurality of perceptron nodes, each said perceptron node connected to a corresponding node from said plurality of nodes, said method comprising the steps of:
a) defining Voronoi cells as said localized domains for said nodes;
b) determining in which Voronoi cell said input pattern lies;
c) transforming said input pattern to said localized domain of said Voronoi cell in which said input pattern lies; and
d) classifying said localized input pattern at said perceptron node corresponding to said node.

45. The method of classifying an input pattern in a neural network system according to claim 44, wherein determining in which Voronoi cell said input pattern lies comprises determining a minimum distance between a Voronoi cell generator and said input pattern, said input pattern lying within said Voronoi cell generated by said Voronoi cell generator.

46. The method of classifying an input pattern in a neural network system according to claim 44, wherein said network further comprises a minimum node and wherein determining in which Voronoi cell said input pattern lies comprises computing a vector, said vector being a difference between said input pattern and a first reference vector stored in said node, determining the magnitude of said vector and comparing said magnitude with values received from said minimum node.

47. The method of classifying an input pattern in a neural network system according to claim 44, wherein transforming said input pattern to said localized domain comprises computing a localized vector, said localized vector being a difference between said input pattern and a second reference vector stored in said node.
48. The method of classifying an input pattern in a neural network system according to claim 44, wherein classifying said localized input pattern comprises determining on which side of a partitioning hyperplane said localized input pattern lies.

49. A method of producing weight factors and modifying a size of a neural network, said neural network comprising a plurality of nodes for transforming said input pattern to localized domains defined by each node, a plurality of perceptron nodes, each said perceptron node connected to a corresponding node from said plurality of nodes and an output node connected to said plurality of perceptron nodes, said method comprising the steps of:
a) acquiring a plurality of sample points and correct classifications of said sample points;
b) establishing a candidate post for each said sample point, said candidate post comprising said sample point, a nearest neighbor of opposite type for said sample point, a midpoint vector and a normal vector;
c) adjusting each said midpoint vector to correctly classify a set of said sample points that may be classified by said candidate post associated with said midpoint vector;
d) pruning the size of said network to establish a final post set; and
e) assigning components of each said midpoint vector from said final post set as synaptic weights for a corresponding node and components of each said normal vector from said final post set as synaptic weights for a corresponding perceptron node.

50. The method according to claim 49, wherein said set of sample points that may be classified by said candidate post comprises a set of sample points that are classified by said candidate post associated with said midpoint vector rather than the candidate post associated with said sample points.

51. The method according to claim 49, wherein said set of sample points that may be classified by said candidate post comprises a set of sample points that are classified by said candidate post associated with said midpoint vector rather than the candidate post associated with said sample points and are incorrectly classified by said candidate post associated with said midpoint vector.

52. The method according to claim 49, wherein said set of sample points that may be classified by said candidate post comprises a set of sample points that are closer to said sample point of said candidate post than said nearest neighbor of opposite type.

53. The method according to claim 49, wherein pruning the size of the network comprises the steps of:
a) defining a popularity sphere centered at each said post midpoint, said popularity sphere having a popularity sphere radius equal to a distance from said post midpoint to a first sample point misclassified by said post midpoint;
b) counting the number of correctly classified sample points within said popularity sphere;
c) defining a proxy sphere centered at each said sample point, said proxy sphere having a proxy sphere radius equal to a distance from said sample point to a first post midpoint that misclassifies said sample point;
d) identifying a proxy post for each said sample point, said proxy post being a candidate post within said proxy sphere of said sample point with the highest number of correctly classified sample points within its popularity sphere;
e) eliminating candidate posts that are not proxy posts; and
f) repeating steps a) through e) until no said candidate posts are eliminated.
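The training elements recited in claims 13 and 49 (step b) can be pictured with a short sketch. The Python below is a minimal, hedged illustration, not the patent's implementation: it builds one candidate post per sample by brute-force nearest-opposite-neighbor search, and the dictionary keys (sample, neighbor, midpoint, normal and the two class fields) are invented names for the claimed quantities.

```python
import numpy as np

def candidate_posts(points, labels):
    """Build one candidate post per sample point: the sample, its nearest
    neighbor of the opposite class, the midpoint between them, and a normal
    vector along the line joining them (claims 13 and 49, step b)."""
    points = np.asarray(points, dtype=float)
    labels = np.asarray(labels)
    posts = []
    for p, c in zip(points, labels):
        mask = labels != c                       # samples of the opposite type
        rivals = points[mask]
        j = int(np.argmin(np.sum((rivals - p) ** 2, axis=1)))
        q = rivals[j]                            # nearest neighbor of opposite type
        posts.append({
            "sample": p, "neighbor": q,
            "class_p": c, "class_q": labels[mask][j],
            "midpoint": (p + q) / 2.0,   # reference vector for a difference node
            "normal": q - p,             # weight vector for the paired perceptron
        })
    return posts
```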
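The midpoint adjustment of claims 15 and 34 can be sketched the same way. The sphere geometry (center one-third of the way from the sample toward its opposite neighbor, radius reaching back to the sample) follows the claim language; reading "most peripherally projected" as the projection farthest from the sample point along the joining line is an assumption of this sketch.

```python
import numpy as np

def _post_class(post, s):
    # Side of the post's partitioning hyperplane determines the class it assigns.
    side = np.dot(s - post["midpoint"], post["normal"])
    return post["class_q"] if side >= 0 else post["class_p"]

def adjust_midpoint(post, points, labels):
    """One reading of the adjustment in claim 15: project misclassified samples
    inside the one-third sphere onto the p-q line, then move the midpoint to
    halfway between the most peripheral projection and the opposite neighbor."""
    p, q = post["sample"], post["neighbor"]
    d = float(np.linalg.norm(q - p))
    u = (q - p) / d                      # unit vector from sample toward neighbor
    center = p + u * (d / 3.0)           # one-third of the way from p toward q
    radius = d / 3.0                     # the sphere reaches back to p itself
    # Line projections of misclassified samples lying within the sphere.
    t = [float(np.dot(s - p, u))
         for s, c in zip(points, labels)
         if _post_class(post, s) != c and np.linalg.norm(s - center) <= radius]
    if t:
        s_star = p + u * max(t)          # most peripheral projection (assumed)
        post["midpoint"] = (s_star + q) / 2.0
    return post
```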
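Finally, the pruning loop of claims 19, 38 and 53 repeats until an iteration eliminates no candidate post. The sketch below is one reading of that loop: "first" sample or post is interpreted as nearest in Euclidean distance, and sphere membership as strictly nearer than that first failure; the _post_class helper from the previous sketch is reused.

```python
import numpy as np

def prune_posts(posts, points, labels):
    """Iterative pruning (claims 19, 38, 53): keep only proxy posts until
    nothing more is eliminated, yielding the final post set."""
    points = np.asarray(points, dtype=float)
    labels = np.asarray(labels)
    while True:
        # Popularity of each post: correctly classified samples nearer to its
        # midpoint than the first sample the post misclassifies.
        popularity = []
        for post in posts:
            order = np.argsort(np.linalg.norm(points - post["midpoint"], axis=1))
            n = 0
            for i in order:
                if _post_class(post, points[i]) != labels[i]:
                    break
                n += 1
            popularity.append(n)
        # Each sample nominates the most popular post among those nearer to it
        # than the first post midpoint that misclassifies it (its proxy post).
        keep = set()
        for s, c in zip(points, labels):
            d = [np.linalg.norm(s - post["midpoint"]) for post in posts]
            in_sphere = []
            for i in np.argsort(d):
                if _post_class(posts[i], s) != c:
                    break
                in_sphere.append(i)
            if in_sphere:
                keep.add(max(in_sphere, key=lambda i: popularity[i]))
        if len(keep) == len(posts):
            return posts                 # nothing eliminated: the final post set
        posts = [posts[i] for i in sorted(keep)]
```

The surviving midpoints become the difference-node reference vectors and the surviving normals the perceptron weights, as recited in claim 49, step e.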
