Channel: Media Foundation Development for Windows Desktop forum

IMFSourceReader to play back H264 from network (RTP)


I have implemented a custom media source (IMFMediaSource and IMFMediaStream) to feed packets to an IMFSourceReader created with MFCreateSourceReaderFromMediaSource.

I have followed the custom media source guide on MSDN and have set MFMEDIASOURCE_IS_LIVE, as well as starting timestamps at 0 and incrementing them from the RTP timestamps, in 100-nanosecond units.

The source reader seems to be pulling new samples OK: it calls my RequestSample method and I queue IMFSamples as needed.

However, the problem is that my IMFSourceReaderCallback::OnReadSample is not getting called. I believe the callback is set up properly, because I got an E_FAIL when I initially delivered packets in the wrong way, but now I get no errors. I am calling ReadSample asynchronously on the source reader just after I deliver my custom media source data.
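For reference, the async setup I'm using looks roughly like this (a minimal sketch; m_pCallback is my IMFSourceReaderCallback implementation and m_pSource is the custom media source):

CComPtr<IMFAttributes> pAttr;
HRESULT hr = MFCreateAttributes(&pAttr, 1);
if (SUCCEEDED(hr))
    hr = pAttr->SetUnknown(MF_SOURCE_READER_ASYNC_CALLBACK, m_pCallback);   // must be set before the reader is created
if (SUCCEEDED(hr))
    hr = MFCreateSourceReaderFromMediaSource(m_pSource, pAttr, &m_pReader);
// Each ReadSample call should produce exactly one OnReadSample callback
// (with a sample, an error, or a stream-flag notification).
if (SUCCEEDED(hr))
    hr = m_pReader->ReadSample((DWORD)MF_SOURCE_READER_FIRST_VIDEO_STREAM,
                               0, NULL, NULL, NULL, NULL);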

It seems like MFTrace might help, but I can't figure out how to use it on Metro (by the way, this is a Metro/Win8 app). If I try to run mftrace from the command line, it reports a bad EXE format error.


libraries cannot be read


I'm using an "old" IDE for developing Win32 applications. I have downloaded both the 6.1 and the 8.1 Windows SDK. My IDE cannot use the libraries shipped with the 8.1 SDK, and the headers of the 6.1 SDK are not complete, so compilation is not successful. What can I do?

Regards, Eugene


Capture video from webcam and write as ismv (fragmented mp4)


Hello,

I want to capture video from a media device (webcam) and write the stream out as ISMV (fragmented MP4).

I have used Microsoft Media Foundation to capture the video.

And I have the Smooth Streaming Format SDK API for adding a stream and converting it into fragmented MP4.

But I cannot figure out how to send the captured stream to the Smooth Streaming Format SDK API SSFMuxCreate.
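The capture side I have in mind is roughly this (a minimal sketch; m_pReader is assumed to be an IMFSourceReader created from the webcam source, and the exact SSFMux* calls are omitted, since that handoff is exactly the open question):

CComPtr<IMFSample> pSample;
DWORD dwStreamIndex = 0, dwFlags = 0;
LONGLONG llTimestamp = 0;
HRESULT hr = m_pReader->ReadSample((DWORD)MF_SOURCE_READER_FIRST_VIDEO_STREAM,
                                   0, &dwStreamIndex, &dwFlags, &llTimestamp, &pSample);
if (SUCCEEDED(hr) && pSample)
{
    // Flatten the sample so the payload is one contiguous block.
    CComPtr<IMFMediaBuffer> pBuffer;
    hr = pSample->ConvertToContiguousBuffer(&pBuffer);

    BYTE* pData = NULL;
    DWORD cbData = 0;
    if (SUCCEEDED(hr))
        hr = pBuffer->Lock(&pData, NULL, &cbData);

    // ... presumably pData/cbData plus llTimestamp are what would be
    // handed to the SSF muxer for fragmenting ...

    if (pData)
        pBuffer->Unlock();
}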

Please advise.

Thanks


Media Foundation H264 encoder - IMFTransform::SetOutputType() returns MF_E_ATTRIBUTENOTFOUND (0xC00D36E6L)


I want to use the Media Foundation H264 encoder to encode some samples, but the call to IMFTransform::SetOutputType() fails with the MF_E_ATTRIBUTENOTFOUND error code.

The steps I take to use the H264 encoder are below:

1. Use the MFTEnumEx() function to enumerate MFT_CATEGORY_VIDEO_ENCODER with MajorType MFMediaType_Video and SubType MFVideoFormat_H264; after this is done, I get an IMFActivate interface.

2. Use the member method IMFActivate::ActivateObject() with RIID IID_IMFTransform to get an IMFTransform interface.

3. Use the MFCreateMediaType() function to create an IMFMediaType interface.

4. Use the IMFTransform::GetOutputAvailableType() member function, passing the interface pointer I got in step 3, to get one available IMFMediaType.

5. Use the IMFMediaType::SetUINT32() member method to set MF_MT_AVG_BITRATE on the IMFMediaType interface from the previous step (500 kbps).

6. Use the MFSetAttributeSize() function to set MF_MT_FRAME_SIZE on the IMFMediaType interface from the previous step (320 x 240).

7. Use the MFSetAttributeRatio() function to set MF_MT_FRAME_RATE on the IMFMediaType interface from the previous step (30, 1).

8. Use IMFTransform::SetOutputType() to set the media type with the IMFMediaType interface from the previous step, with output stream 0 and dwFlags 0. I get the MF_E_ATTRIBUTENOTFOUND error code from this call.
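Here is a condensed sketch of what I am attempting (assumptions: the first enumerated encoder is used, MFT_ENUM_FLAG_SYNCMFT is the enumeration flag, and the type is built directly rather than starting from GetOutputAvailableType; MF_MT_INTERLACE_MODE is also set here, since many encoders require it on the output type):

IMFActivate** ppActivate = NULL;
UINT32 count = 0;
MFT_REGISTER_TYPE_INFO outInfo = { MFMediaType_Video, MFVideoFormat_H264 };
HRESULT hr = MFTEnumEx(MFT_CATEGORY_VIDEO_ENCODER, MFT_ENUM_FLAG_SYNCMFT,
                       NULL, &outInfo, &ppActivate, &count);

IMFTransform* pEncoder = NULL;
if (SUCCEEDED(hr) && count > 0)
    hr = ppActivate[0]->ActivateObject(IID_PPV_ARGS(&pEncoder));
// (release each ppActivate[i] and CoTaskMemFree(ppActivate) afterwards)

IMFMediaType* pOutType = NULL;
if (SUCCEEDED(hr)) hr = MFCreateMediaType(&pOutType);
if (SUCCEEDED(hr)) hr = pOutType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
if (SUCCEEDED(hr)) hr = pOutType->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_H264);
if (SUCCEEDED(hr)) hr = pOutType->SetUINT32(MF_MT_AVG_BITRATE, 500000);
if (SUCCEEDED(hr)) hr = MFSetAttributeSize(pOutType, MF_MT_FRAME_SIZE, 320, 240);
if (SUCCEEDED(hr)) hr = MFSetAttributeRatio(pOutType, MF_MT_FRAME_RATE, 30, 1);
if (SUCCEEDED(hr)) hr = pOutType->SetUINT32(MF_MT_INTERLACE_MODE, MFVideoInterlace_Progressive);
if (SUCCEEDED(hr)) hr = pEncoder->SetOutputType(0, pOutType, 0);   // fails with MF_E_ATTRIBUTENOTFOUND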

Could anyone please give me a hint about what's wrong with my steps?

Thanks a lot.

IMFTransform problem in decoding h.264


Please help me see if there is any problem with the following code. ppDecoder->ProcessOutput never returns S_OK, and in the end no video data comes out.

// Note: hr, imfmediatype (the input H.264 media type) and pSample (the input
// sample) are declared and filled in elsewhere.
IMFActivate **ppActivate = NULL;
UINT32 count = 0;
IMFTransform *ppDecoder = NULL;
GUID subtype = GUID_NULL;

hr = imfmediatype->GetGUID(MF_MT_SUBTYPE, &subtype);

// Look for a synchronous decoder that takes the source subtype in and RGB32 out.
MFT_REGISTER_TYPE_INFO info   = { MFMediaType_Video, subtype };
MFT_REGISTER_TYPE_INFO info_2 = { MFMediaType_Video, MFVideoFormat_RGB32 };

hr = MFTEnumEx(
    MFT_CATEGORY_VIDEO_DECODER,
    MFT_ENUM_FLAG_SYNCMFT,
    &info,      // Input type
    &info_2,    // Output type
    &ppActivate,
    &count);

if (SUCCEEDED(hr) && count == 0)
{
    return hr;
}

if (SUCCEEDED(hr))
{
    hr = ppActivate[0]->ActivateObject(IID_PPV_ARGS(&ppDecoder));
}

for (UINT32 i = 0; i < count; i++)
{
    ppActivate[i]->Release();
}

hr = ppDecoder->SetInputType(0, imfmediatype, 0);

// Walk the available output types and pick RGB32.
IMFMediaType *pOutputType = NULL;
GUID guidMajorType = GUID_NULL, guidSubType = GUID_NULL;

for (DWORD dwTypeIndex = 0; hr != MF_E_NO_MORE_TYPES; dwTypeIndex++)
{
    hr = ppDecoder->GetOutputAvailableType(0, dwTypeIndex, &pOutputType);

    if (pOutputType && SUCCEEDED(hr))
    {
        hr = pOutputType->GetMajorType(&guidMajorType);
        hr = pOutputType->GetGUID(MF_MT_SUBTYPE, &guidSubType);

        if ((guidMajorType == MFMediaType_Video) && (guidSubType == MFVideoFormat_RGB32))
        {
            hr = ppDecoder->SetOutputType(0, pOutputType, 0);
            break;
        }
    }
    else
    {
        // Output type not found
        hr = E_FAIL;
        break;
    }
}

hr = ppDecoder->ProcessMessage(MFT_MESSAGE_NOTIFY_BEGIN_STREAMING, NULL);

hr = pSample->SetUINT32(MFSampleExtension_Discontinuity, TRUE);
hr = ppDecoder->ProcessInput(0, pSample, 0);

// Allocate an output buffer of the size the decoder reports.
MFT_OUTPUT_STREAM_INFO osi;
hr = ppDecoder->GetOutputStreamInfo(0, &osi);

IMFMediaBuffer *pBuffer_test = NULL;
hr = MFCreateMemoryBuffer(osi.cbSize, &pBuffer_test);
pBuffer_test->SetCurrentLength(osi.cbSize);

IMFSample *outputSample = NULL;
hr = MFCreateSample(&outputSample);
outputSample->AddBuffer(pBuffer_test);
outputSample->AddRef();

MFT_OUTPUT_DATA_BUFFER outputDataBuffer;
ZeroMemory(&outputDataBuffer, sizeof(outputDataBuffer));

DWORD processOutputStatus = 0;
outputDataBuffer.dwStreamID = 0;
outputDataBuffer.pSample = outputSample;

DWORD m1 = 0;
hr = ppDecoder->GetOutputStatus(&m1);

// Keep calling ProcessOutput until the decoder asks for more input.
do
{
    hr = ppDecoder->ProcessOutput(0, 1, &outputDataBuffer, &processOutputStatus);
}
while (hr != MF_E_TRANSFORM_NEED_MORE_INPUT);

IMFSample *pBitmapSample = NULL;
hr = MFCreateSample(&pBitmapSample);
hr = pBitmapSample->AddBuffer(pBuffer_test);

IMFMediaBuffer *buffer_test_2 = NULL;
pBuffer_test->Release();
pBuffer_test = NULL;
hr = outputDataBuffer.pSample->ConvertToContiguousBuffer(&buffer_test_2);

BYTE *pData_test = NULL;
DWORD cbTotalLength;
DWORD cbCurrentLength;
hr = buffer_test_2->Lock(&pData_test, &cbTotalLength, &cbCurrentLength);

IMFMediaType *pMediaType_test = NULL;
hr = ppDecoder->GetOutputCurrentType(0, &pMediaType_test);

// Get the frame size and stride through media type attributes.
INT32 stride_test = 0;
UINT32 m_Width;
UINT32 m_Height;
LONG stride_test_2 = 0;

hr = MFGetAttributeSize(pMediaType_test, MF_MT_FRAME_SIZE, &m_Width, &m_Height);
hr = pMediaType_test->GetUINT32(MF_MT_DEFAULT_STRIDE, (UINT32*)&stride_test);
hr = pMediaType_test->GetGUID(MF_MT_SUBTYPE, &subtype);
hr = MFGetStrideForBitmapInfoHeader(subtype.Data1, m_Width, &stride_test_2);

// Create the bitmap with the given size (GDI+).
Bitmap* m_pBitmap = new Bitmap(m_Width, m_Height, (INT32)stride_test_2,
                               PixelFormat32bppRGB, pData_test);

How to use SourceReader (for H.264 to RGB conversion)?

I'm trying to move from DirectShow land to Media Foundation and am trying to build around some existing code. Unfortunately I've run into some issues.

Basically, what I want to do is take an MPEG4 video that is encoded in H.264 and convert it to RGB or YUV. I can then take that raw YUV or RGB data and make an OpenGL texture from it.

I've concluded that I need to use the Source Reader and have the IMFSourceReader::ReadSample method set up and returning samples. The problem is that I can't seem to set the subtype to MEDIASUBTYPE_UYVY or _RGB24 like I could in DirectShow.
Basically, I get an MF_E_INVALIDMEDIATYPE error when I call IMFSourceReader::SetCurrentMediaType with my media type (on which I've only set MF_MT_MAJOR_TYPE and MF_MT_SUBTYPE).
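Roughly, what I am attempting looks like this (a sketch; the file name is a placeholder, and I'm assuming MF_SOURCE_READER_ENABLE_VIDEO_PROCESSING is what lets the reader insert a converter after the H.264 decoder):

CComPtr<IMFAttributes> pAttr;
HRESULT hr = MFCreateAttributes(&pAttr, 1);
if (SUCCEEDED(hr))
    hr = pAttr->SetUINT32(MF_SOURCE_READER_ENABLE_VIDEO_PROCESSING, TRUE);

CComPtr<IMFSourceReader> pReader;
if (SUCCEEDED(hr))
    hr = MFCreateSourceReaderFromURL(L"video.mp4", pAttr, &pReader);   // placeholder file name

// Request a type with only major type and subtype set;
// the reader negotiates the rest.
CComPtr<IMFMediaType> pType;
if (SUCCEEDED(hr)) hr = MFCreateMediaType(&pType);
if (SUCCEEDED(hr)) hr = pType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
if (SUCCEEDED(hr)) hr = pType->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_RGB32);
if (SUCCEEDED(hr))
    hr = pReader->SetCurrentMediaType((DWORD)MF_SOURCE_READER_FIRST_VIDEO_STREAM,
                                      NULL, pType);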

Reading http://msdn.microsoft.com/en-us/library/dd797815%28VS.85%29.aspx gives me the impression that I can't even use those subtypes anyway. If that is the case, then is it possible to convert the H.264 to RGB?

Which leads me to my next question. Basically, all I want is raw RGB (24-bit) data so that I can create an OpenGL texture. Would I be best to:
- follow the instructions at the end of http://msdn.microsoft.com/en-us/library/dd389281%28VS.85%29.aspx#why_use_source_reader and set up the DirectX components, then...
- ultimately receive IDirect3DSurface9 objects, on which I can call GetDC to retrieve the raw RGB data, which I can then pass to my OpenGL texture?

Is this even possible? I'm getting a little lost amongst all the documentation at the moment and just want to cut down what to focus on.

Thanks

Edit:
It occurred to me that trying to get OpenGL and DirectX to share the graphics card is not a good idea. Instead I'll just get the H.264 output as MEDIASUBTYPE_YUY2 and convert it to RGB with an OpenGL shader (or something).

Am I right in thinking that Media Foundation is still limited in video support, i.e. it doesn't support MP4V?

SetInputType fails on Hardware MFT


I have code that activates the Intel M-JPEG Decoder MFT. Calling SetInputType on the MFT returns E_FAIL. Is there another attribute that must be set on the MFT? This code works with synchronous MFTs from Microsoft. For clarity, I've removed the hr checks and error handling.

MFTEnumEx(MFT_CATEGORY_VIDEO_DECODER, MFT_ENUM_FLAG_HARDWARE, &inputTypeGUID, &outputTypeGUID, &availableMFTs, &numMFTsAvailable);

availableMFTs[0]->ActivateObject(IID_PPV_ARGS(&m_mjpegMFT));

UnlockAsyncMFT(m_mjpegMFT);  // Comment this out for MFT_ENUM_FLAG_SYNCMFT

mfVideoFormat.dwSize = sizeof(mfVideoFormat);
mfVideoFormat.guidFormat = MFVideoFormat_MJPG;
mfVideoFormat.videoInfo.dwWidth = m_imageWidth;
mfVideoFormat.videoInfo.dwHeight = m_imageHeight;

MFCreateVideoMediaType(&mfVideoFormat, &pVideoMediaTypeInput);

// This next call fails, hr = E_FAIL
hr = m_mjpegMFT->SetInputType(0, pVideoMediaTypeInput, 0);   // FAILS

======================

I've queried the MFT for its attributes, and it is indeed a video decoder. I've added this code after activating the MFT:

IMFMediaType* pMediaType;

hr = m_mjpegMFT->GetInputAvailableType( 0, 0, &pMediaType );

And pMediaType shows the following in the debugger:

MF_MT_MAJOR_TYPE=MFMediaType_Video
MF_MT_SUBTYPE = CLSID = 0x01cb9e38 {47504A4D-0000-0010-8000-00AA00389B71} "MJPG"}

=============================

It seems like there is some other piece of state that needs to be set. Does anyone know what is missing?
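One fallback I've been considering is sketched below (assumption: the hardware MFT may be happier with the input type it advertises itself than with one built by MFCreateVideoMediaType, so I take its advertised type, patch in the frame size, and hand it back):

CComPtr<IMFMediaType> pAvailable;
hr = m_mjpegMFT->GetInputAvailableType(0, 0, &pAvailable);
if (SUCCEEDED(hr))
    hr = MFSetAttributeSize(pAvailable, MF_MT_FRAME_SIZE, m_imageWidth, m_imageHeight);
if (SUCCEEDED(hr))
    hr = m_mjpegMFT->SetInputType(0, pAvailable, 0);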

Thanks.

Topoedit error: Topoedit.exe is not a valid win32 application


Hi,

I have recently downloaded and installed the Windows SDK to start some Windows desktop media application development. If it helps give context for an answer: I only have Windows 7, so I downloaded that version of the software, not the latest Windows 8 version.

To get things started, I wanted to experiment with building some topologies using Topoedit. However, when I tried to run the application, the following error message immediately appeared: "Topoedit.exe is not a valid win32 application." Consequently I could not use it. None of the different topoedit versions (for the different architectures) that came with the SDK worked. I did manage to use graphedit.exe OK, but I would still like to validate my topology in topoedit.

So my question is: how can I fix this error so that I can use topoedit? Also, is it strictly necessary to use topoedit if I can still get graphedit to work? Will it have an impact on my application design / debugging capabilities?

Thanks in advance.


How to tear down the topology connection in Media Foundation and change video resolution in VC++?


Hi,

I'm working with Media Foundation to show a preview from a USB camera. I used the Media Session to show the preview, and built a partial topology as follows: Video Capture Device (source node) -> EVR renderer (output node).

I enumerated the supported video resolutions using the presentation descriptor. Now I want to change the video resolution and format. Before changing the video resolution, I need to:

     1) Stop the preview.

     2) Remove only the connection between the source node and the output node.

     3) After setting the format, reconnect and start streaming.

I have to do these 3 steps in Media Foundation.

In DirectShow, we used to enumerate the pins available and remove them one by one. How do I do this in Media Foundation?
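A minimal sketch of one common alternative (assumptions: m_pSession is the running IMFMediaSession, pStreamDescriptor comes from the presentation descriptor, and dwDesiredIndex is the enumerated format to switch to). Rather than surgically unplugging nodes, the session is stopped, the new format is selected on the stream descriptor's type handler, and a freshly built topology is set:

hr = m_pSession->Stop();

CComPtr<IMFMediaTypeHandler> pHandler;
hr = pStreamDescriptor->GetMediaTypeHandler(&pHandler);

CComPtr<IMFMediaType> pNewType;
hr = pHandler->GetMediaTypeByIndex(dwDesiredIndex, &pNewType);
hr = pHandler->SetCurrentMediaType(pNewType);

// Rebuild source node -> EVR node as before into pNewTopology, then hand it
// to the session; this flag replaces the current topology.
hr = m_pSession->SetTopology(MFSESSION_SETTOPOLOGY_IMMEDIATE, pNewTopology);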

I am new to Media Foundation and have been struggling with this issue for the past 3 days. Please help me out.

Thanks in advance.


Does Windows keep its most efficient method of displaying images a secret, such that ordinary people like me can't use it?


I know of a machine vision app that has a 0% CPU usage rate while displaying very high resolution images (2000x5000, 10000x10000, and higher, of course with two scroll bars) captured by an industrial camera.

I have tried all the usual methods of displaying, such as Windows GDI, OpenGL textures, OpenGL FBOs, OpenGL PBOs, D3D textures, DirectShow, and DXVA2, and none of them achieves that performance.

Possibly I just didn't grasp the essence of these methods.

For context on the amazing app's performance, note that:

1. The camera is shooting a conveyor belt that carries commodities with barcodes.

2. Images are captured by an image capture card, so capturing does not use the CPU.

Use Work Queues or regular threads for Vista compatible WASAPI?


We have an application which has used WASAPI for years with both Exclusive and Shared event-driven mode. In Exclusive mode, we use the 3ms buffer. If that is too small for our users' machines, we tell them to use Shared mode. The rendering thread is just a std::thread which waits for the samples-ready event and calls GetBuffer/ReleaseBuffer.

I have recently discovered that AvSetMmThreadCharacteristics() simply does not work in our particular situation. I can't replicate the result in a smaller test app, so it must have something to do with using a .NET front end with a C++ COM DLL back end, or perhaps some other particular. My testing method was basically checking how easy it is to make it underrun, and it is incredibly easy unless I raise the rendering thread's priority by some other means. One method is sticking AvSetMmThreadCharacteristics() in the main thread. I don't know why that works when calling it from the rendering thread does not.

In any case, while researching this I see Microsoft telling us that using Work Queues is preferred over making our own threads. A lot of the MF functions that make this more manageable are Windows 8 minimum, and we need to maintain backward compatibility with Vista. I cannot find a WASAPI example which uses Work Queues and is Vista compatible. Is there one? If Vista compatibility is a requirement, are Work Queues still preferred?

I managed to make a test app that uses Work Queues, but I cannot for the life of me figure out how to make the priority of my worker threads high enough to avoid underruns. In my test app, creating my own thread and using AvSetMmThreadCharacteristics() made underruns nearly impossible to induce. My inclination now is to just slog through and try to figure out why AvSetMmThreadCharacteristics() isn't working in our main application (where it works in our test app). Any advice would be most appreciated.
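For reference, the MMCSS registration that works in the test app is roughly this (a minimal sketch; "Pro Audio" is the task name we use, and avrt.h with Avrt.lib is required):

#include <windows.h>
#include <avrt.h>   // link with Avrt.lib

DWORD taskIndex = 0;
// Call this on the thread that waits for the samples-ready event and
// calls GetBuffer/ReleaseBuffer.
HANDLE hMmcss = AvSetMmThreadCharacteristicsW(L"Pro Audio", &taskIndex);
if (hMmcss == NULL)
{
    DWORD err = GetLastError();   // registration failed; err says why
}
// ... render loop ...
if (hMmcss != NULL)
    AvRevertMmThreadCharacteristics(hMmcss);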

IMF Source Reader leaking/consuming memory

I don't know whether this is just the Source Reader's basic behavior, but even though I am releasing every sample, it keeps consuming memory, up to the size of the file being read.

Source reader:
    hr = pAttributes->SetUINT32(MF_SOURCE_READER_ENABLE_VIDEO_PROCESSING, TRUE);
    hr = MFCreateSourceReaderFromURL(wszFileName, pAttributes, &m_pReader);
Code:

/*in loop*/

hr = m_pReader->ReadSample((DWORD)MF_SOURCE_READER_FIRST_VIDEO_STREAM, 0, NULL, &dwFlagss, &time, &pSampleTmp);
SafeRelease(&pSampleTmp);

Is there some attribute to set, or is there nothing I can do about this, so that I have to use the Media Session or MFPlay? If so, could you propose how to get individual samples from them? (DXVA would be best, but I think I can solve that part.)

IMFSourceReader and MFVideoFormat_NV12 subtype


I'm trying to get the RGB frames from a video using the IMFSourceReader, and all works fine so far; but I've noticed that the frame format returned by the IMFSample interface when the output subtype is MFVideoFormat_NV12 does not correspond to the description given in the documentation at:

http://msdn.microsoft.com/en-us/library/windows/desktop/dd206750%28v=vs.85%29.aspx

and actually corresponds to the YV12 format given on the same page.

That is, the NV12 data returned does not have the interleaved UV channel, but instead has the half-size U block and then the half-size V block.
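To spell out the difference (a sketch; pData, width and height are placeholders for a locked frame, and any extra row padding from the stride is ignored):

BYTE* pY  = pData;                               // both layouts: full-size Y plane
// NV12 per the docs: one half-height plane of interleaved bytes U0 V0 U1 V1 ...
BYTE* pUV = pData + width * height;
// YV12 per the same page: two quarter-size planes, V first, then U.
BYTE* pV  = pData + width * height;
BYTE* pU  = pV + (width / 2) * (height / 2);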

Just to be clear: this isn't a problem, as I can handle both formats, but I'd like to know whether this is a peculiarity of my machine or a consistent inconsistency(!) I can rely on when the software is deployed.

I'm running Windows 7, with MF_SDK_VERSION 0x0002 and MF_API_VERSION 0x0070.

Source Reader ReadSample(), IMFSourceReaderCallback::OnReadSample() memory leak?


First I execute:

hr = m_pReader->ReadSample(
        (DWORD)MF_SOURCE_READER_FIRST_VIDEO_STREAM,
        0,
        NULL,   // actual
        NULL,   // flags
        NULL,   // timestamp
        NULL    // sample
        );

Then at the beginning of the following callback, the memory used by the process grows by about 3 MB:

HRESULT CCapture::OnReadSample(
    HRESULT hrStatus,
    DWORD /*dwStreamIndex*/,
    DWORD /*dwStreamFlags*/,
    LONGLONG llTimeStamp,
    IMFSample *pSample      // Can be NULL
    )
{
    .....

}

Is this expected, or is it a leak?

   

Media Foundation - How to use custom wav byte stream parser

$
0
0

Hi,

I am working on a Win32 MFT application and I would like to use my custom .WAV byte stream parser instead of the OS's built-in WAV parser. The application always uses the built-in WAV parser, even though I added the transform node for my custom .wav byte stream parser and the custom decoder. Can you suggest how I can use a custom .wav parser in my MF application?
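A minimal sketch of one possibility I'm considering (assumptions: Windows 8 or later, and pHandlerActivate is a hypothetical IMFActivate that creates my handler; a process-local byte stream handler registration should take precedence over the in-built one when the source resolver opens a .wav file):

// pHandlerActivate: hypothetical IMFActivate for my custom handler
hr = MFRegisterLocalByteStreamHandler(L".wav", L"audio/wav", pHandlerActivate);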


DX11 Video Renderer sample code is incomplete.

Your ref : http://code.msdn.microsoft.com/windowsdesktop/DirectX-11-Video-Renderer-0e749100

Dear Sirs,

So ... that project creates a DLL, which isn't a standard EVR, so I have no clue how to fire it up.

Feedback states that the only way to test it is to use a closed-source version of topoedit in the Windows 8 SDK.

That isn't very helpful, as it doesn't demonstrate how to use the thing.

Please provide sample code, the simpler the better, that demonstrates how to use this DirectX11 Video Renderer (or a derivative) to throw a video texture onto some simple geometry on the screen.

As a follow-up, please demonstrate multiple video textures playing simultaneously, to show that the API supports this feature. If it doesn't, please add this support :)

Sorry to give you a hard time, but I need a solid video API, and if Windows doesn't provide one, it's time to look to other operating systems for a robust solution.

Regards,
Steve.

Sometimes ReadSample() does not cause OnReadSample() to happen in asynchronous mode?

When I debug my program, I find that sometimes ReadSample() does not cause OnReadSample() to happen in asynchronous mode. How can I know whether a given ReadSample() call will never produce an OnReadSample() callback?
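One way to at least detect it (a sketch under the assumption that each successful ReadSample request should be matched by exactly one OnReadSample; m_pendingReads and m_critSec are app-side members I would add):

{
    CComCritSecLock<CComAutoCriticalSection> lock(m_critSec);
    ++m_pendingReads;   // one outstanding request
}
HRESULT hr = m_pReader->ReadSample((DWORD)MF_SOURCE_READER_FIRST_VIDEO_STREAM,
                                   0, NULL, NULL, NULL, NULL);
if (FAILED(hr))
{
    // The request was never queued, so no callback will arrive for it.
    CComCritSecLock<CComAutoCriticalSection> lock(m_critSec);
    --m_pendingReads;
}

// In OnReadSample: decrement m_pendingReads under the same lock. If the
// count stays non-zero past a generous timeout, a callback was lost.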

IMFCaptureEngineOnSampleCallback::OnSample callback stalls :(


Hello,

New to Media Foundation's video capture API here, but I have an app that performs a video capture preview of a webcam. I picked up most of the ideas from this sample code (http://code.msdn.microsoft.com/windowsapps/Media-Capture-Sample-adf87622).

My app is different in that I need to receive the raw video samples and process them myself. Looking through the docs, I see one can get this via IMFCapturePreviewSink::SetSampleCallback. However, when I do this, my callback gets called exactly 10 times and then stalls. This always happens, and I'm out of ideas. Any help would be most appreciated.

Code snippet below, error handling omitted for brevity:

CComPtr<IMFAttributes> pAttribs;
HRESULT hr = MFCreateAttributes(&pAttribs, 1);

CComPtr<IMFCaptureEngineClassFactory> pFactory;
hr = CoCreateInstance(CLSID_MFCaptureEngineClassFactory, NULL, CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&pFactory));

// Create capture engine
CComPtr<IMFCaptureEngine> pCaptureEngine;
hr = pFactory->CreateInstance(CLSID_MFCaptureEngine, IID_PPV_ARGS(&pCaptureEngine));

// captureEngineCB, and pActivateDevice already exist.
hr = pCaptureEngine->Initialize(&captureEngineCB, pAttribs, NULL, pActivateDevice);

// Create the capture sink
CComPtr<IMFCaptureSink> pCaptureSink;
hr = pCaptureEngine->GetSink(MF_CAPTURE_ENGINE_SINK_TYPE_PREVIEW, &pCaptureSink);

// Get the capture preview sink
CComPtr<IMFCapturePreviewSink> pCapturePreviewSink;
hr = pCaptureSink->QueryInterface(IID_PPV_ARGS(&pCapturePreviewSink));

// Get the source so we can get/set the current media type
CComPtr<IMFCaptureSource> pCaptureSource;
hr = pCaptureEngine->GetSource(&pCaptureSource);

CComPtr<IMFMediaType> pMediaType;
hr = pCaptureSource->GetCurrentDeviceMediaType(MF_CAPTURE_ENGINE_PREFERRED_SOURCE_STREAM_FOR_VIDEO_PREVIEW, &pMediaType);

hr = pMediaType->SetUINT32(MF_MT_ALL_SAMPLES_INDEPENDENT, TRUE);


DWORD dwSinkStreamIndex;
hr = pCapturePreviewSink->AddStream((DWORD)MF_CAPTURE_ENGINE_PREFERRED_SOURCE_STREAM_FOR_VIDEO_PREVIEW, pMediaType, NULL, &dwSinkStreamIndex);


// Set ourselves to receive the video samples via OnSample callback.
hr = pCapturePreviewSink->SetSampleCallback(dwSinkStreamIndex, this);

// Now start the preview.
hr = pCaptureEngine->StartPreview();


How to display high resolution video frames with low time consumption


My app has a very high CPU usage rate and takes a very long time when displaying a frame of size 2000x5000 or 10000x10000 captured by a camera.

What method should I use?

Invoke executed by a queue created with MFAllocateWorkQueue stops running Release on the passed object


Hello Folks,

Quite a strange situation: I have my own source, and to improve performance I switched from MFASYNC_CALLBACK_QUEUE_STANDARD to a custom queue. Initially I had a problem with performance but not with memory. After I allocated my own queue, the performance problem is gone, but I got a memory leak.

Here is some code:

HRESULT MjpegSource::SendOperation(SourceOperationType operationType)
{
    HRESULT hr = S_OK;
    CComPtr<ISourceOperation> pOperation;

    do
    {
        // create a new SourceOperationType command
        SourceOperation::CreateInstance(&pOperation, operationType);

        // queue the command on the queue
        hr = MFPutWorkItem(workQueue_, this, static_cast<IUnknown*>(pOperation));
        BREAK_ON_FAIL(hr);
    }
    while(false);

    return hr;
}

And Invoke:

HRESULT MjpegSource::Invoke(IMFAsyncResult* pResult)
{
    HRESULT hr = S_OK;
    CComPtr<IMFAsyncResult> pCallerResult = pResult;
    CComPtr<ISourceOperation> pCommand;
    CComPtr<IUnknown> pState;

    do
    {
        CComCritSecLock<CComAutoCriticalSection> lock(_critSec);

        // Get the state object associated with this asynchronous call
        hr = pCallerResult->GetState(&pState);
        BREAK_ON_FAIL(hr);

        // QI the IUnknown state variable for the ISourceOperation interface
        hr = pState->QueryInterface(IID_ISourceOperation, (void**)&pCommand);
        BREAK_ON_FAIL(hr);

        // Make sure the source is not shut down - if the source is shut down, just exit
        hr = CheckShutdown();
        BREAK_ON_FAIL(hr);

        // figure out what the requested command is, and then dispatch it to one of the
        // internal handler objects
        switch (pCommand->Type())
        {
        case SourceOperationOpen:
            hr = InternalOpen(pCommand);
            break;
        case SourceOperationStart:
            hr = InternalStart(pCommand);
            break;
        case SourceOperationStop:
            hr = InternalStop();
            break;
        case SourceOperationPause:
            hr = InternalPause();
            break;
        case SourceOperationStreamNeedData:
            hr = InternalRequestSample();
            break;
        case SourceOperationEndOfStream:
            hr = InternalEndOfStream();
            break;
        }
    }
    while(false);

    return hr;
}

As far as I can tell, the destructor for the "pCommand" object stops being executed. Once I put back MFASYNC_CALLBACK_QUEUE_STANDARD,

the Release() method of that object is executed again, but with the custom queue it never is. What is more interesting: if I slowly step through my application in the debugger with the custom queue, Release() does get executed, but never at full speed. I am completely stuck here and have no idea what is going on. Oh, also, once I stop my application I see all the objects start to execute Release(), but that is quite understandable.

Any help greatly appreciated

Regards

Aleksey
