Hello, everyone. CVE-2025-2312 is a CVE related to cifs.upcall, Kerberos, and the Linux kernel cifs module. I don't think it is really a problem. The scenario can be briefly described as follows:

driver pod                               | application pod
------------------------------------------------------------------------
mount cifs to /mnt                       |
                                         | access /mnt, but the connection is down
reconnect and call cifs.upcall           |
cifs.upcall switches to the namespace    |
of the application pod                   | fetch cred and save it in ccache
use cred to connect                      |

The CVE says that making the upcall to the application pod will leak sensitive data from the driver pod. I don't think this attack is valid. Nobody except the driver pod can know the details of the cred, and I don't see how sensitive data could be leaked. Is there any scenario that supports this CVE? Looking forward to a response, thanks.

> -------- Forwarded Message --------
> Subject: Re: Regarding service request 1841915
> Date: Mon, 7 Apr 2025 11:53:58 -0300
> From: Pedro Sampaio <psampaio@xxxxxxxxxx>
> To: zhangjian (CG) <zhangjian496@xxxxxxxxxx>
>
> Hello,
>
> I'm not familiar with that behavior for cifs.
>
> But the point of the vulnerability, in my understanding, is the
> crossover of namespaces triggered from the pod's namespace. Even if
> details can be known by the pod, unintended behavior can cross over
> between namespaces.
>
> I'm not sure if I was clear. I apologize, as I'm not an expert in cifs
> or kerberos. Maybe a consultation with upstream mailing lists is a
> better option for this discussion.
>
> Regards,
>
> On Thu, Apr 3, 2025 at 1:30 AM zhangjian (CG) <zhangjian496@xxxxxxxxxx>
> wrote:
>>
>> Hello,
>>
>> I'm more familiar with nfs, so maybe there is some misunderstanding on
>> my side. In nfs, if path /x is mounted on host node a and exported as
>> /y in pod b, then pod b, which can see path /y, can also access it,
>> without caring about who created the Kerberos cred or where it is
>> stored. Pod b knows no details of the cred. Suppose /x is mounted in
>> pod b and exported to the host node (just suppose it, though it
>> couldn't happen) as /y.
>> Then the host user can access it without knowing any detail of the
>> cred. Only the kernel and the host or pod that created the cred know
>> the details of the cred.
>>
>> I think a user who uses cifs can't know the details of the cred, as
>> mentioned above. Is this also valid for cifs?
>>
>> On 2025/4/2 3:17, Pedro Sampaio wrote:
>>> Hello,
>>>
>>> This is the attack scenario from the original vulnerability report,
>>> quoted in its entirety:
>>>
>>> ---
>>> "In some cases, like the one described below, the cifs.upcall program
>>> from the cifs-utils package makes an upcall to the wrong namespace in
>>> containerized environments.
>>>
>>> Consider the following scenario:
>>> A CIFS/SMB file share is mounted on a host node using Kerberos
>>> authentication. During the session setup phase, the Linux kernel's
>>> cifs.ko module makes an upcall to user space to retrieve the Kerberos
>>> service ticket from the credential cache.
>>>
>>> In typical (non-container) environments, this process works correctly,
>>> but in containerized environments, the upcall may be directed to a
>>> different namespace than intended, leading to issues. For example:
>>>
>>> (1) The file share is mounted on the host node at /mnt/testshare1,
>>> meaning the Kerberos credential cache is stored in the host's
>>> namespace.
>>> (2) A Docker container is created, and the file share path
>>> /mnt/testshare1 is exported to the container at /sharedpath.
>>> (3) When the service ticket expires and the SMB connection is lost,
>>> before the ticket is refreshed in the credential cache, an application
>>> inside the container performs a file operation. This triggers the
>>> kernel to attempt a session reconnect.
>>> (4) During the session setup, a Kerberos ticket is needed, so the
>>> kernel invokes the cifs.upcall binary using the request_key function.
>>> However, cifs.upcall switches to the namespace of the caller (i.e.,
>>> the container), causing it to attempt to read the credential cache
>>> from the container's namespace.
>>> But since the original mount happened in the host namespace, the
>>> credential cache is located on the host, not in the container. This
>>> results in the upcall failing to access the correct credential cache,
>>> or accessing a credential cache which doesn't belong to the correct
>>> user."
>>> ---
>>>
>>> My initial analysis concluded that the CVE assignment is still valid.
>>> A clear attack vector was presented in the original report, and
>>> although the fix might seem odd, the vulnerability's existence does
>>> not depend on the type or format of the fix.
>>>
>>> The race condition between the time when the Kerberos ticket expires
>>> and the time when the reconnect is made is a security-relevant event
>>> and may lead to a leak. Even if the risk is low, that also does not
>>> influence the decision to assign a CVE ID.
>>>
>>> With that said, we are maintaining the CVE assignment at this moment.
>>>
>>> If you wish to send more evidence, we can continue this discussion.
>>> You can also appeal this decision through MITRE's Top-Level Root CNA.
>>>
>>> Regards,
>>>
>>> On Tue, Apr 1, 2025 at 3:19 AM zhangjian (CG) <zhangjian496@xxxxxxxxxx> wrote:
>>>>
>>>> Hello, is there any progress on this request?
>>>>
>>>> On 2025/3/28 20:03, Pedro Sampaio wrote:
>>>>> Hello,
>>>>>
>>>>> Thank you for submitting this dispute request.
>>>>>
>>>>> We'll analyze your claim and will report back as soon as an update
>>>>> is available.
>>>>>
>>>>> Regards,
>>>>>
>>>>> On Thu, Mar 27, 2025 at 11:14 PM 'zhangjian (CG)' via
>>>>> CNALR-Coordination@xxxxxxxxxx <cnalr-coordination@xxxxxxxxxx> wrote:
>>>>>>
>>>>>> Hello.
>>>>>> We have disagreements in our internal discussion about whether
>>>>>> LTS-version OSes should introduce the fix patch for CVE-2025-2312.
>>>>>> The key point is that some committers don't think this should be
>>>>>> tagged as a CVE. We advise rejecting CVE-2025-2312.
>>>>>> There are some reasons for rejecting it:
>>>>>>
>>>>>> 1.
CVE-2025-2312 describes this case: when cifs is mounted in the
>>>>>> driver pod but the cifs path is accessed in the application pod,
>>>>>> which pod should cifs.upcall get the Kerberos cred from? In other
>>>>>> words, to which namespace should the upcall be sent?
>>>>>> CVE-2025-2312 treats sending the upcall to the application pod as
>>>>>> a problem, one that may leak information from the application pod.
>>>>>> But we don't think it is a problem. One could equally say that
>>>>>> sending the upcall to the driver pod may leak information from the
>>>>>> driver pod.
>>>>>>
>>>>>> 2. The fix patch adds a mount option, CIFS.UPCALL, to let the user
>>>>>> choose whether the upcall is sent to the driver pod or the
>>>>>> application pod. But sending the upcall to the application pod is
>>>>>> still the default behaviour, which implies that sending the upcall
>>>>>> to the application pod is normal behaviour.
>>>>>>
>>>>>> 3. Adding a mount option that lets the user choose whether or not
>>>>>> to fix a CVE seems odd to us.
>>>>>>
>>>>>> 4. This is more like a new feature. It can be supported in
>>>>>> cifs-utils and the kernel in the next release version. LTS-version
>>>>>> OSes can avoid the leak by design or by restricting behaviour. It
>>>>>> is unnecessary for LTS versions to support a new mount option.
>>>>>>
>>>>>> Thanks for the review. Looking forward to a reply.
>>>>>>
>>>>>>
>>>>>> On 2025/3/28 9:20, CVE Request wrote:
>>>>>>> Hello,
>>>>>>>
>>>>>>> Regarding your CVE service request, logged on 2025-03-25T23:43:35,
>>>>>>> we have the following question or update:
>>>>>>>
>>>>>>> To update CVE-2025-2312 you must contact the assigning CNA - the
>>>>>>> Red Hat CNA-LR.
>>>>>>>
>>>>>>> https://www.cve.org/CVERecord?id=CVE-2025-2312
>>>>>>>
>>>>>>> https://www.cve.org/PartnerInformation/ListofPartners/partner/redhat
>>>>>>>
>>>>>>> Please do not hesitate to contact the CVE Team by replying to this
>>>>>>> email if you have any questions, or to provide more details.
>>>>>>>
>>>>>>> Please do not change the subject line, which allows us to
>>>>>>> effectively track your request.
>>>>>>>
>>>>>>> CVE Assignment Team
>>>>>>>
>>>>>>> M/S M300, 202 Burlington Road, Bedford, MA 01730 USA
>>>>>>>
>>>>>>> [A PGP key is available for encrypted communications at
>>>>>>> http://cve.mitre.org/cve/request_id.html]
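
To make the behaviour under dispute concrete, here is a rough Python toy model of the upcall's namespace choice. This is not real kernel or cifs-utils code — the function name, the "mount"/"app" targets, and the data are all invented for illustration; namespaces are modelled as plain dicts mapping a ccache name to its contents.

```python
# Toy model of the cifs.upcall namespace choice discussed in this thread.
# NOT real kernel/cifs-utils code: each namespace is just a dict that
# may or may not contain a Kerberos credential cache ("krb5cc").

def upcall_fetch_ticket(mount_ns, caller_ns, target):
    """Simulate cifs.upcall fetching a Kerberos service ticket.

    target == "mount": read the ccache from the namespace where the
        share was mounted (the driver pod / host in the thread above).
    target == "app": read the ccache from the caller's namespace
        (the application pod / container), the pre-fix behaviour.
    Returns the ticket contents, or None if no ccache exists there.
    """
    ns = mount_ns if target == "mount" else caller_ns
    return ns.get("krb5cc")

# Scenario from the report: the mount and its ccache live in the
# driver pod's (host's) namespace; the application pod has no ccache.
driver_ns = {"krb5cc": "ticket-for-driver-pod"}
app_ns = {}

# Pre-fix behaviour: the upcall follows the caller, finds no ccache,
# and the session reconnect fails.
assert upcall_fetch_ticket(driver_ns, app_ns, "app") is None

# Directing the upcall at the mount namespace finds the ticket.
assert upcall_fetch_ticket(driver_ns, app_ns, "mount") == "ticket-for-driver-pod"

# The variant the report flags: the application pod happens to hold a
# ccache for a *different* user, and the reconnect silently uses it.
other_user_ns = {"krb5cc": "ticket-for-some-other-user"}
assert upcall_fetch_ticket(driver_ns, other_user_ns, "app") == "ticket-for-some-other-user"
```

This only models the selection logic; whether reading the application pod's ccache actually constitutes a leak is exactly the point under dispute in this thread.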