
(9-4-01) A Target Behavior Prediction System Based on the BAT Trajectory Prediction Model: Utilities (1)

This project implements a multimodal trajectory prediction system that models vehicle behavior with a deep learning model. By taking a vehicle's historical trajectory, information about surrounding vehicles, and scene context into account, the system can accurately predict future vehicle motion. The project implements a proprietary neural network architecture, BAT, including trajectory transformation and probability density estimation. Through multi-task learning, the model handles prediction tasks of different granularities and with multiple output dimensions, improving its adaptability to diverse driving scenarios.

9.4.1  Utilities

Create the file utils.py, which contains the utility functions for the trajectory prediction model. It implements the negative log-likelihood (NLL) and mean squared error (MSE) losses, supports masking to handle variable-length outputs, and provides the corresponding loss computations for testing. It also includes helper functions for trajectory preprocessing and data loading. In short, utils.py provides the loss computation and data handling routines commonly used in trajectory prediction. The implementation of utils.py proceeds as follows.
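The masked-loss idea described above can be sketched as follows. This is an illustrative minimal version, not the exact utils.py signature: the function name, tensor shapes, and test values here are assumptions.

```python
import torch

def masked_mse(y_pred, y_gt, mask):
    """Masked MSE over variable-length futures: padded timesteps
    (mask == 0) contribute nothing to the loss (illustrative sketch)."""
    err = torch.pow(y_pred - y_gt, 2) * mask  # zero out padded timesteps
    return err.sum() / mask.sum()             # average over valid steps only

pred = torch.tensor([[1.0], [2.0], [3.0]])
gt = torch.tensor([[1.0], [1.0], [0.0]])
mask = torch.tensor([[1.0], [1.0], [0.0]])    # last step is padding
print(masked_mse(pred, gt, mask).item())      # 0.5
```

Dividing by `mask.sum()` rather than the tensor size is what keeps sequences of different lengths comparable.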

(1) Define the class ngsimDataset, which subclasses torch.utils.data.Dataset and handles the trajectory dataset.

# Imports assumed at the top of utils.py (args is the hyperparameter dict defined elsewhere)
import numpy as np
import scipy.io as scp
import torch
import networkx as nx
from torch.utils.data import Dataset
from scipy.spatial.distance import pdist, squareform

class ngsimDataset(Dataset):
    def __init__(self, mat_file, t_h=args['t_hist'], t_f=args['t_fut'], d_s=args['skip_factor'],
                 enc_size=args['encoder_size'], grid_size=args['grid_size'], n_lat=args['num_lat_classes'],
                 n_lon=args['num_lon_classes'], input_dim=args['input_dim'], polar=args['pooling'] == 'polar'):

        # Load the trajectory data and track information from the given mat_file
        self.D = scp.loadmat(mat_file)['traj']
        self.T = scp.loadmat(mat_file)['tracks']

        # Dataset parameters
        self.t_h = t_h  # length of track history
        self.t_f = t_f  # length of predicted trajectory
        self.d_s = d_s  # downsampling rate of all sequences
        self.enc_size = enc_size  # size of the encoder LSTM
        self.grid_size = grid_size  # size of the social grid
        self.n_lat = n_lat  # number of lateral maneuver classes
        self.n_lon = n_lon  # number of longitudinal maneuver classes
        self.polar = polar  # pooling strategy (polar or otherwise)
        self.input_dim = input_dim - 1  # input dimension

    def __len__(self):
        # Total number of samples in the dataset
        return len(self.D)

    def __getitem__(self, idx):
        # Fetch one sample by index
        dsId = self.D[idx, 0].astype(int)  # dataset ID
        vehId = self.D[idx, 1].astype(int)  # vehicle ID
        t = self.D[idx, 2]  # time
        grid = self.D[idx, 8:]  # grid information for this sample
        neighbors = []  # histories of neighboring vehicles
        radius = 32.8  # radius defining neighboring vehicles
        hist = self.getHistory(vehId, t, vehId, dsId,
                               nbr_flag=False)  # track history

        fut = self.getFuture(vehId, t, dsId, nbr_flag=False)  # future trajectory

        # Track histories of the neighboring vehicles
        for i in grid:
            neighbors.append(self.getHistory(
                i.astype(int), t, vehId, dsId, nbr_flag=True))

        # Adjacency matrices and centrality measures for this sample
        frame_ID_adj_mat_list, closeness_list, degree_list, eigenvector_list = \
            self.get_all_adjancent_matrix_and_centrality(vehId, t, dsId, grid, radius)

        # Means of the adjacency matrices and centrality measures
        all_adjancent_matrix_mean, all_closeness_mean, all_degree_mean, all_eigenvector_mean = self.rate(
            frame_ID_adj_mat_list, closeness_list, degree_list, eigenvector_list)

        # One-hot encodings of the longitudinal and lateral maneuver classes
        lon_enc = np.zeros([self.n_lon])
        lon_enc[int(self.D[idx, 7] - 1)] = 1

        lat_enc = np.zeros([self.n_lat])
        lat_enc[int(self.D[idx, 6] - 1)] = 1

        return hist, fut, neighbors, lat_enc, lon_enc, dsId, vehId, t, \
            torch.Tensor(all_adjancent_matrix_mean), torch.Tensor(closeness_list), \
            torch.Tensor(degree_list), torch.Tensor(eigenvector_list), \
            torch.Tensor(all_closeness_mean), torch.Tensor(all_degree_mean), \
            torch.Tensor(all_eigenvector_mean)

The code above is explained as follows:

  1. __init__(): the constructor. It initializes the dataset and reads the trajectory data. Its parameters include the trajectory file path mat_file and hyperparameters such as t_h (track history length), t_f (prediction horizon), d_s (downsampling rate), and enc_size (encoder LSTM size).
  2. __len__(self): returns the size of the dataset.
  3. __getitem__(self, idx): returns the sample at index idx. It first extracts basic trajectory information, then calls a series of methods, including getHistory and getFuture, to obtain the vehicle's history and future trajectories.
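The one-hot maneuver encoding built in __getitem__ (lat_enc, lon_enc) can be shown in isolation. The class count and label below are toy values, not taken from the dataset:

```python
import numpy as np

def one_hot(label, num_classes):
    """One-hot vector for a 1-based maneuver label, mirroring how
    __getitem__ builds lat_enc and lon_enc."""
    enc = np.zeros(num_classes)
    enc[int(label - 1)] = 1  # labels are 1-based, indices 0-based
    return enc

# e.g. 3 lateral classes, label 2 -> [0, 1, 0]
print(one_hot(2, 3))
```

The `label - 1` shift matters: the maneuver labels stored in `self.D` are 1-based, while NumPy indexing is 0-based.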

(2) Define the method get_coordinates_1, which looks up a vehicle's coordinates from the vehicle ID, time, and dataset ID and returns the coordinate list [x, y], or [NaN, NaN] if the vehicle is not found.

    def get_coordinates_1(self, vehId, t, dsId):
        if vehId == 0:
            return [np.nan, np.nan]

        if self.T.shape[1] <= vehId - 1:
            return [np.nan, np.nan]

        vehTrack = self.T[dsId - 1][vehId - 1].transpose()
        if vehTrack.size == 0 or np.argwhere(vehTrack[:, 0] == t).size == 0:
            return [np.nan, np.nan]
        x = vehTrack[np.where(vehTrack[:, 0] == t)][0, 1]
        y = vehTrack[np.where(vehTrack[:, 0] == t)][0, 2]
        return [x, y]

(3) Define the method get_coordinates, which, like get_coordinates_1, looks up a vehicle's coordinates from the vehicle ID, time, and dataset ID and returns the coordinate list [x, y], or [NaN, NaN] if the vehicle is not found.

    def get_coordinates(self, vehId, t, dsId):
        if vehId == 0:
            return [np.nan, np.nan]
        else:
            if self.T.shape[1] <= vehId - 1:
                return [np.nan, np.nan]
            vehTrack = self.T[dsId - 1][vehId - 1].transpose()
            if vehTrack.size == 0 or np.argwhere(vehTrack[:, 0] == t).size == 0:
                return [np.nan, np.nan]
            x = vehTrack[np.where(vehTrack[:, 0] == t)][0, 1]
            y = vehTrack[np.where(vehTrack[:, 0] == t)][0, 2]
            return [x, y]

(4) Define the method set_distance, which thresholds a distance against the radius: it returns 0 if the distance exceeds the radius, otherwise the distance itself.

    def set_distance(self, a,radius):
        if a > radius:
            return float(0)
        else:
            return a

(5) Define the method create_adjancent_matrix_1, which builds an adjacency matrix from the vehicle ID, time, dataset ID, grid information, and radius, and returns a dictionary containing the frame ID and the adjacency matrix.

    def create_adjancent_matrix_1(self, vehId, t, dsId, grid, radius):
        lar = 39  # 3 x 13 grid
        vehId_ind = round(lar / 2) - 1  # index of the ego vehicle within the grid
        frame_ID_adj_mat_dict = {}
        grid[vehId_ind] = vehId.astype(int)
        grid_1 = [0, 0]
        for i in grid:
            A = np.array(self.get_coordinates_1(i.astype(int), t, dsId))
            grid_1 = np.array(np.vstack((grid_1, A)))
        grid_1 = grid_1[1:]
        # Pairwise Euclidean distances between all agents, with NaNs replaced by 0
        distance = np.array(pdist(grid_1, 'euclidean'))
        distance = np.array(np.nan_to_num(distance))
        adj_matrix_1 = np.array(squareform(distance))

        frame_ID_adj_mat_dict['frame_ID'] = t
        frame_ID_adj_mat_dict['adj_matrix'] = np.array(adj_matrix_1)
        return frame_ID_adj_mat_dict
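The pdist/squareform pair at the heart of this method can be demonstrated on toy coordinates (three agents on a line; the positions are made up):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

# Three agents at x = 0, 3, 6 on the same lane (toy coordinates)
coords = np.array([[0.0, 0.0], [3.0, 0.0], [6.0, 0.0]])
condensed = pdist(coords, 'euclidean')      # condensed distance vector: [3, 6, 3]
adj = squareform(np.nan_to_num(condensed))  # symmetric matrix, zero diagonal
print(adj)
# [[0. 3. 6.]
#  [3. 0. 3.]
#  [6. 3. 0.]]
```

`np.nan_to_num` matters because agents absent from a frame get [NaN, NaN] coordinates, which would otherwise propagate NaN into every distance involving them.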

(6) Define the method create_centrality, which computes centrality measures (closeness, degree, and eigenvector centrality) from an adjacency-matrix dictionary and a frame ID, and returns the resulting centrality arrays.

    def create_centrality(self, frame_ID_adj_mat_dict, t):
        closeness_1 = []
        degree_1 = []
        eigenvector_1 = []

        if frame_ID_adj_mat_dict['frame_ID'] == t:
            G = nx.from_numpy_array(np.array(frame_ID_adj_mat_dict['adj_matrix']))
            closeness = nx.closeness_centrality(G)
            degree = nx.degree_centrality(G)
            eigenvector = nx.eigenvector_centrality(G, max_iter=100000)
            for dic_1 in closeness:
                closeness_1.append(closeness[dic_1])
            for dic_2 in degree:
                degree_1.append(degree[dic_2])
            for dic_3 in eigenvector:
                eigenvector_1.append(eigenvector[dic_3])

            return np.array(closeness_1), np.array(degree_1), np.array(eigenvector_1)
        else:
            # Frame mismatch: return three empty arrays so callers can still unpack
            return np.empty([0]), np.empty([0]), np.empty([0])
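The networkx calls used here can be exercised on a small hand-built adjacency matrix (the 3-agent graph below is a toy example, not dataset output):

```python
import numpy as np
import networkx as nx

# Weighted adjacency for 3 fully connected agents (distances as edge weights)
adj = np.array([[0.0, 1.0, 2.0],
                [1.0, 0.0, 1.0],
                [2.0, 1.0, 0.0]])
G = nx.from_numpy_array(adj)  # nonzero entries become weighted edges
closeness = nx.closeness_centrality(G)
degree = nx.degree_centrality(G)
eigenvector = nx.eigenvector_centrality(G, max_iter=100000)
# Collect the per-node dict values in node order, as create_centrality does
closeness_arr = np.array([closeness[n] for n in closeness])
print(closeness_arr.shape)  # (3,)
```

Note that `nx.degree_centrality` ignores edge weights: in this complete 3-node graph every node has degree centrality 1.0 regardless of the distances stored as weights.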

(7) Define the method get_global_adjancent_matrix, which builds the global adjacency-matrix list from the vehicle ID, time, dataset ID, grid information, and radius, and returns a list of adjacency-matrix dictionaries covering multiple frames.

    def get_global_adjancent_matrix(self, vehId, t, dsId, grid, radius):
        frame_ID_global_adjancent_list = []
        if vehId == 0:
            return np.empty([0, 2])
        else:
            if self.T.shape[1] <= vehId - 1: 
                return np.empty([0, 2])
            vehTrack = self.T[dsId - 1][vehId - 1].transpose()
            if vehTrack.size == 0 or np.argwhere(vehTrack[:, 0] == t).size == 0:
                return np.empty([0, 2])
            stpt = np.maximum(0, np.argwhere(
                vehTrack[:, 0] == t).item() - self.t_h)
            enpt = np.argwhere(vehTrack[:, 0] == t).item() + 1
            for item1 in range(stpt, enpt,2):
                t1 = vehTrack[item1, 0]
                frame_ID_adj_mat_dict = self.create_adjancent_matrix_1(
                    vehId, t1, dsId, grid, radius)
                frame_ID_global_adjancent_list.append(frame_ID_adj_mat_dict)
 
        return frame_ID_global_adjancent_list

(8) Define the method get_all_adjancent_matrix_and_centrality, which, for all frames in the history window, computes the adjacency matrices and centrality measures from the vehicle ID, time, dataset ID, grid information, and radius. It returns the list of adjacency-matrix dictionaries together with arrays of closeness, degree, and eigenvector centrality.

    def get_all_adjancent_matrix_and_centrality(self, vehId, t, dsId, grid, radius):
        frame_ID_adj_mat_list = []
        closeness_list = []
        degree_list = []
        eigenvector_list = []
        if vehId == 0:
            return np.empty([0, 2])
        else:
            if self.T.shape[1] <= vehId - 1:  
                return np.empty([0, 2])
            vehTrack = self.T[dsId - 1][vehId - 1].transpose()
            if vehTrack.size == 0 or np.argwhere(vehTrack[:, 0] == t).size == 0:
                return np.empty([0, 2])
            stpt = np.maximum(0, np.argwhere(
                vehTrack[:, 0] == t).item() - self.t_h)
            enpt = np.argwhere(vehTrack[:, 0] == t).item() + 1
            for item1 in range(stpt, enpt,2):
                t1 = vehTrack[item1, 0]
                frame_ID_adj_mat_dict = self.create_adjancent_matrix_1(
                    vehId, t1, dsId, grid, radius)
                frame_ID_adj_mat_list.append(frame_ID_adj_mat_dict)
                closeness, degree, eigenvector = self.create_centrality(
                    frame_ID_adj_mat_dict, t1)
 
                closeness_list.append(closeness)
                degree_list.append(degree)
                eigenvector_list.append(eigenvector)
 
            closeness_list = np.array(closeness_list)
            degree_list = np.array(degree_list)
            eigenvector_list = np.array(eigenvector_list)
 
        return frame_ID_adj_mat_list, closeness_list, degree_list, eigenvector_list
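The frame-window selection in the two methods above (`stpt`, `enpt`, stepping by 2) can be isolated as follows. The frame timestamps and history length are toy values:

```python
import numpy as np

# Select up to t_h frames before time t (inclusive), sampling every 2nd frame,
# mirroring the stpt/enpt logic in get_all_adjancent_matrix_and_centrality.
t_h = 6
frames = np.arange(100, 120)                    # toy frame timestamps of one track
t_idx = int(np.argwhere(frames == 110).item())  # position of the current frame
stpt = max(0, t_idx - t_h)
enpt = t_idx + 1                                # +1 so the current frame is included
window = frames[stpt:enpt:2]
print(window)  # [104 106 108 110]
```

The `max(0, ...)` clamp handles vehicles observed for fewer than t_h frames, and the stride of 2 halves the number of adjacency matrices built per sample.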

(9) Define the method get_torch_Tensor_list, which, despite its name, converts a list of adjacency-matrix dictionaries into a list of NumPy arrays and returns it.

    def get_torch_Tensor_list(self,Mat_list):
        Numpy_array_list = []
        for item in Mat_list:
            matrix_list = item['adj_matrix']
            Numpy_array_list.append(np.array(matrix_list))
 
        return Numpy_array_list

(10) Define the method rate, which computes evolution rates from the adjacency-matrix list and the closeness, degree, and eigenvector centrality lists, and returns the mean evolution rate together with the mean of each centrality measure.

    def rate(self, frame_ID_adj_mat_list, closeness_list, degree_list, eigenvector_list):
        num_of_agents = 39
        all_rates_list = []
        count = 1
        diags_list = []
 
        for list1 in frame_ID_adj_mat_list:
            adj = list1['adj_matrix']
            d_vals = []
            for item in adj:
                row_sum = sum(item)
                d_vals.append(row_sum)
            diag_array = np.diag(d_vals)
            laplacian = diag_array - adj
            L_diag = np.diag(laplacian)
            diags_list.append(np.asarray(L_diag))
        all_rates_arr = np.zeros_like(np.zeros([num_of_agents,1]))
        prev_ = diags_list[0]
 
        for items in range(1, len(diags_list)):
            next_ = diags_list[items]
            rate = next_ - prev_
            all_rates_arr = np.column_stack((all_rates_arr, rate))
            prev_ = next_
        all_rates_arr = np.delete(all_rates_arr, 0, 1)
        all_rates_list.append(all_rates_arr)  
    
        all_adjancent_matrix_mean = []
        all_rates_arr_1 = all_rates_list[0]
        for item in range(0, num_of_agents):
            avg=np.mean(all_rates_arr_1[item])
            all_adjancent_matrix_mean.append(avg)
        
        all_adjancent_matrix_mean = np.array(all_adjancent_matrix_mean)
        all_adjancent_matrix_mean = torch.Tensor(all_adjancent_matrix_mean)
        all_adjancent_matrix_mean = all_adjancent_matrix_mean.reshape(num_of_agents, 1)
        all_rates_list = np.array(all_rates_list)
 
        # closeness mean
        prev_ = closeness_list[0]
        all_rates_closeness_list = []
        for list2 in range(1, len(closeness_list)):
            next_ = closeness_list[list2]
            rate = [next_[i]-prev_[i] for i in range(0, len(prev_))]
            all_rates_closeness_list.append(rate)
            prev_ = next_
 
        all_rates_closeness_list = np.array(all_rates_closeness_list)
        all_rates_closeness_list = all_rates_closeness_list.reshape(num_of_agents,-1)
        all_closeness_mean = []
 
        for item1 in range(0, len(all_rates_closeness_list)):
            all_closeness_mean.append(np.mean(all_rates_closeness_list[item1]))
 
        all_closeness_mean = np.array(all_closeness_mean)
        all_closeness_mean = torch.Tensor(all_closeness_mean)
        all_closeness_mean = all_closeness_mean.reshape(num_of_agents, 1)
 
 
        # degree mean
        prev_ = degree_list[0]
        all_rates_degree_list = []
 
        for list2 in range(1, len(degree_list)):
            next_ = degree_list[list2]
            rate = [next_[i]-prev_[i] for i in range(0, len(prev_))]
            all_rates_degree_list.append(rate)
            prev_ = next_
 
        all_degree_mean = []
        all_rates_degree_list = np.array(all_rates_degree_list)
        all_rates_degree_list = all_rates_degree_list.reshape(num_of_agents,-1)
 
        for item2 in range(0, len(all_rates_degree_list)):
            all_degree_mean.append(np.mean(all_rates_degree_list[item2]))
 
        all_degree_mean = np.array(all_degree_mean)
        all_degree_mean = torch.Tensor(all_degree_mean)
        all_degree_mean = all_degree_mean.reshape(num_of_agents, 1)
 
        # eigenvector mean
        prev_ = eigenvector_list[0]
        all_rates_eigenvector_list = []
 
        for list3 in range(1, len(eigenvector_list)):
            next_ = eigenvector_list[list3]
            rate = [next_[i]-prev_[i] for i in range(0, len(prev_))]
            all_rates_eigenvector_list.append(rate)
            prev_ = next_
 
        all_eigenvector_mean = []
        all_rates_eigenvector_list = np.array(all_rates_eigenvector_list)
        all_rates_eigenvector_list = all_rates_eigenvector_list.reshape(num_of_agents,-1)
 
        for item3 in range(0, len(all_rates_eigenvector_list)):
            all_eigenvector_mean.append(
                np.mean(all_rates_eigenvector_list[item3]))
        all_eigenvector_mean = np.array(all_eigenvector_mean)
        all_eigenvector_mean = torch.Tensor(all_eigenvector_mean)
        all_eigenvector_mean = all_eigenvector_mean.reshape(num_of_agents, 1)
 
        return all_adjancent_matrix_mean, \
            all_closeness_mean, all_degree_mean, all_eigenvector_mean
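The first part of rate(), the "evolution rate" of the Laplacian diagonal across frames, can be reduced to a short sketch. The two toy adjacency matrices below are illustrative:

```python
import numpy as np

def laplacian_diag(adj):
    """Diagonal of the graph Laplacian D - A. With a zero diagonal in A,
    this is just each node's weighted degree (the row sums)."""
    return adj.sum(axis=1)

# Two consecutive toy frames for 3 agents
adj_t0 = np.array([[0., 1., 2.], [1., 0., 1.], [2., 1., 0.]])
adj_t1 = np.array([[0., 2., 2.], [2., 0., 1.], [2., 1., 0.]])
diags = [laplacian_diag(a) for a in (adj_t0, adj_t1)]
# Frame-to-frame difference ("evolution rate"), then its per-agent mean,
# mirroring the diags_list / all_rates_arr logic in rate()
rates = np.column_stack([diags[i] - diags[i - 1] for i in range(1, len(diags))])
mean_rate = rates.mean(axis=1).reshape(-1, 1)
print(mean_rate.ravel())  # [1. 1. 0.]
```

Here agents 0 and 1 moved closer together between the frames (their mutual distance weight grew from 1 to 2), so their degree rose by 1 while agent 2's stayed flat; rate() performs this same differencing over all frames and all 39 grid slots.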

   

To be continued.
