Inspired by how human drivers focus their attention, this work augments lane-detection networks with Focusing Sampling, Partial Field of View Evaluation, an enhanced FPN architecture, and a Directional IoU Loss: targeted innovations addressing obstacles to precise lane detection for autonomous driving. Experiments show that our Focusing Sampling strategy, which emphasizes vital distant details rather than sampling uniformly, significantly improves recognition of curved and distant lanes, a capability essential for safety, on benchmarks and in practice alike. FENetV1 achieves state-of-the-art performance on conventional metrics through enhancements that isolate perspective-aware contexts mimicking driver vision, while FENetV2 proves most reliable under the proposed Partial Field of View evaluation. We therefore recommend V2 for practical lane navigation, despite a fractional degradation on standard whole-image measures. Future directions include collecting on-road data and integrating the two complementary frameworks to pursue further breakthroughs guided by human perception principles. Code is available at https://github.com/HanyangZhong/FENet.
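The core intuition behind Focusing Sampling can be illustrated with a minimal sketch. This is not the paper's exact formulation: the power-law schedule and the `gamma` parameter below are assumptions chosen purely to contrast non-uniform row anchors (denser toward the distant, upper part of the image) with the uniform anchors used by conventional row-based lane detectors.

```python
import numpy as np

def uniform_rows(n: int, height: int) -> np.ndarray:
    """Conventional anchors: rows spaced evenly over the image height."""
    return np.linspace(0.0, height - 1, n)

def focusing_rows(n: int, height: int, gamma: float = 2.0) -> np.ndarray:
    """Illustrative focusing anchors (hypothetical power-law schedule).

    Row 0 is the image top, i.e. the distant part of the road, so a
    convex schedule (gamma > 1) places more anchors near small row
    indices, emphasizing far-away lane details.
    """
    t = np.linspace(0.0, 1.0, n)
    return (t ** gamma) * (height - 1)

# With gamma=2, more anchors fall in the distant (upper) half of a
# 100-pixel-high image than with uniform spacing.
far_half_focus = int(np.sum(focusing_rows(10, 100) < 50))
far_half_unif = int(np.sum(uniform_rows(10, 100) < 50))
```

Larger `gamma` concentrates anchors more aggressively toward the horizon; `gamma = 1` recovers uniform sampling.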